00:00:00.000 Started by upstream project "autotest-per-patch" build number 127083
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.026 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.027 The recommended git tool is: git
00:00:00.027 using credential 00000000-0000-0000-0000-000000000002
00:00:00.029 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.059 Fetching changes from the remote Git repository
00:00:00.062 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.093 Using shallow fetch with depth 1
00:00:00.093 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.093 > git --version # timeout=10
00:00:00.127 > git --version # 'git version 2.39.2'
00:00:00.127 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.159 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.159 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.046 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.057 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.069 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD)
00:00:04.069 > git config core.sparsecheckout # timeout=10
00:00:04.080 > git read-tree -mu HEAD # timeout=10
00:00:04.098 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5
00:00:04.132 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters"
00:00:04.133 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10
00:00:04.218 [Pipeline] Start of Pipeline
00:00:04.229 [Pipeline] library
00:00:04.230 Loading library shm_lib@master
00:00:04.231 Library shm_lib@master is cached. Copying from home.
00:00:04.247 [Pipeline] node
00:00:04.256 Running on GP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.258 [Pipeline] {
00:00:04.266 [Pipeline] catchError
00:00:04.268 [Pipeline] {
00:00:04.279 [Pipeline] wrap
00:00:04.289 [Pipeline] {
00:00:04.294 [Pipeline] stage
00:00:04.295 [Pipeline] { (Prologue)
00:00:04.450 [Pipeline] sh
00:00:04.731 + logger -p user.info -t JENKINS-CI
00:00:04.748 [Pipeline] echo
00:00:04.749 Node: GP6
00:00:04.759 [Pipeline] sh
00:00:05.054 [Pipeline] setCustomBuildProperty
00:00:05.068 [Pipeline] echo
00:00:05.070 Cleanup processes
00:00:05.074 [Pipeline] sh
00:00:05.347 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.347 2938384 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.361 [Pipeline] sh
00:00:05.642 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.643 ++ grep -v 'sudo pgrep'
00:00:05.643 ++ awk '{print $1}'
00:00:05.643 + sudo kill -9
00:00:05.643 + true
00:00:05.657 [Pipeline] cleanWs
00:00:05.666 [WS-CLEANUP] Deleting project workspace...
00:00:05.666 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.672 [WS-CLEANUP] done
00:00:05.677 [Pipeline] setCustomBuildProperty
00:00:05.695 [Pipeline] sh
00:00:05.976 + sudo git config --global --replace-all safe.directory '*'
00:00:06.041 [Pipeline] httpRequest
00:00:06.057 [Pipeline] echo
00:00:06.058 Sorcerer 10.211.164.101 is alive
00:00:06.065 [Pipeline] httpRequest
00:00:06.070 HttpMethod: GET
00:00:06.070 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:06.070 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:06.085 Response Code: HTTP/1.1 200 OK
00:00:06.086 Success: Status code 200 is in the accepted range: 200,404
00:00:06.086 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:11.002 [Pipeline] sh
00:00:11.282 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:11.300 [Pipeline] httpRequest
00:00:11.325 [Pipeline] echo
00:00:11.327 Sorcerer 10.211.164.101 is alive
00:00:11.336 [Pipeline] httpRequest
00:00:11.340 HttpMethod: GET
00:00:11.341 URL: http://10.211.164.101/packages/spdk_0a6bb28fae8e79dddddab1d3679650366dfe87d7.tar.gz
00:00:11.342 Sending request to url: http://10.211.164.101/packages/spdk_0a6bb28fae8e79dddddab1d3679650366dfe87d7.tar.gz
00:00:11.343 Response Code: HTTP/1.1 200 OK
00:00:11.344 Success: Status code 200 is in the accepted range: 200,404
00:00:11.344 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_0a6bb28fae8e79dddddab1d3679650366dfe87d7.tar.gz
00:00:26.919 [Pipeline] sh
00:00:27.200 + tar --no-same-owner -xf spdk_0a6bb28fae8e79dddddab1d3679650366dfe87d7.tar.gz
00:00:30.492 [Pipeline] sh
00:00:30.771 + git -C spdk log --oneline -n5
00:00:30.771 0a6bb28fa test/accel/dif: add DIX Generate/Verify suites
00:00:30.771 52c295e65 lib/accel: add DIX verify
00:00:30.771 b5c6fc4f3 lib/accel: add DIX generate
00:00:30.771 8ee2672c4 test/bdev: Add test for resized RAID with superblock
00:00:30.771 19f5787c8 raid: skip configured base bdevs in sb examine
00:00:30.782 [Pipeline] }
00:00:30.801 [Pipeline] // stage
00:00:30.811 [Pipeline] stage
00:00:30.813 [Pipeline] { (Prepare)
00:00:30.833 [Pipeline] writeFile
00:00:30.851 [Pipeline] sh
00:00:31.131 + logger -p user.info -t JENKINS-CI
00:00:31.143 [Pipeline] sh
00:00:31.422 + logger -p user.info -t JENKINS-CI
00:00:31.436 [Pipeline] sh
00:00:31.714 + cat autorun-spdk.conf
00:00:31.714 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:31.714 SPDK_TEST_NVMF=1
00:00:31.714 SPDK_TEST_NVME_CLI=1
00:00:31.714 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:31.714 SPDK_TEST_NVMF_NICS=e810
00:00:31.714 SPDK_TEST_VFIOUSER=1
00:00:31.714 SPDK_RUN_UBSAN=1
00:00:31.714 NET_TYPE=phy
00:00:31.721 RUN_NIGHTLY=0
00:00:31.727 [Pipeline] readFile
00:00:31.757 [Pipeline] withEnv
00:00:31.759 [Pipeline] {
00:00:31.776 [Pipeline] sh
00:00:32.058 + set -ex
00:00:32.058 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:32.058 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:32.058 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:32.058 ++ SPDK_TEST_NVMF=1
00:00:32.058 ++ SPDK_TEST_NVME_CLI=1
00:00:32.058 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:32.058 ++ SPDK_TEST_NVMF_NICS=e810
00:00:32.058 ++ SPDK_TEST_VFIOUSER=1
00:00:32.058 ++ SPDK_RUN_UBSAN=1
00:00:32.058 ++ NET_TYPE=phy
00:00:32.058 ++ RUN_NIGHTLY=0
00:00:32.058 + case $SPDK_TEST_NVMF_NICS in
00:00:32.058 + DRIVERS=ice
00:00:32.058 + [[ tcp == \r\d\m\a ]]
00:00:32.058 + [[ -n ice ]]
00:00:32.058 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:32.058 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:32.058 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:32.058 rmmod: ERROR: Module irdma is not currently loaded
00:00:32.058 rmmod: ERROR: Module i40iw is not currently loaded
00:00:32.058 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:32.058 + true
00:00:32.058 + for D in $DRIVERS
00:00:32.058 + sudo modprobe ice
00:00:32.058 + exit 0
00:00:32.067 [Pipeline] }
00:00:32.087 [Pipeline] // withEnv
00:00:32.093 [Pipeline] }
00:00:32.110 [Pipeline] // stage
00:00:32.122 [Pipeline] catchError
00:00:32.124 [Pipeline] {
00:00:32.141 [Pipeline] timeout
00:00:32.141 Timeout set to expire in 50 min
00:00:32.143 [Pipeline] {
00:00:32.160 [Pipeline] stage
00:00:32.162 [Pipeline] { (Tests)
00:00:32.179 [Pipeline] sh
00:00:32.459 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:32.460 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:32.460 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:32.460 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:32.460 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:32.460 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:32.460 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:32.460 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:32.460 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:32.460 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:32.460 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:00:32.460 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:32.460 + source /etc/os-release
00:00:32.460 ++ NAME='Fedora Linux'
00:00:32.460 ++ VERSION='38 (Cloud Edition)'
00:00:32.460 ++ ID=fedora
00:00:32.460 ++ VERSION_ID=38
00:00:32.460 ++ VERSION_CODENAME=
00:00:32.460 ++ PLATFORM_ID=platform:f38
00:00:32.460 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:32.460 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:32.460 ++ LOGO=fedora-logo-icon
00:00:32.460 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:32.460 ++ HOME_URL=https://fedoraproject.org/
00:00:32.460 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:32.460 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:32.460 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:32.460 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:32.460 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:32.460 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:32.460 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:32.460 ++ SUPPORT_END=2024-05-14
00:00:32.460 ++ VARIANT='Cloud Edition'
00:00:32.460 ++ VARIANT_ID=cloud
00:00:32.460 + uname -a
00:00:32.460 Linux spdk-gp-06 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:32.460 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:33.395 Hugepages
00:00:33.395 node hugesize free / total
00:00:33.395 node0 1048576kB 0 / 0
00:00:33.395 node0 2048kB 0 / 0
00:00:33.395 node1 1048576kB 0 / 0
00:00:33.395 node1 2048kB 0 / 0
00:00:33.395 
00:00:33.395 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:33.395 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:00:33.395 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:00:33.395 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:00:33.395 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:00:33.395 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:00:33.395 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:00:33.395 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:00:33.395 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:00:33.395 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:00:33.395 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:00:33.395 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:00:33.395 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:00:33.395 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:00:33.395 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:00:33.395 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:00:33.395 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:00:33.395 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:00:33.395 + rm -f /tmp/spdk-ld-path
00:00:33.395 + source autorun-spdk.conf
00:00:33.395 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:33.395 ++ SPDK_TEST_NVMF=1
00:00:33.395 ++ SPDK_TEST_NVME_CLI=1
00:00:33.395 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:33.395 ++ SPDK_TEST_NVMF_NICS=e810
00:00:33.395 ++ SPDK_TEST_VFIOUSER=1
00:00:33.395 ++ SPDK_RUN_UBSAN=1
00:00:33.395 ++ NET_TYPE=phy
00:00:33.395 ++ RUN_NIGHTLY=0
00:00:33.395 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:33.395 + [[ -n '' ]]
00:00:33.395 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:33.395 + for M in /var/spdk/build-*-manifest.txt
00:00:33.395 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:33.395 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:33.395 + for M in /var/spdk/build-*-manifest.txt
00:00:33.395 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:33.395 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:33.395 ++ uname
00:00:33.395 + [[ Linux == \L\i\n\u\x ]]
00:00:33.654 + sudo dmesg -T
00:00:33.654 + sudo dmesg --clear
00:00:33.654 + dmesg_pid=2939058
00:00:33.654 + [[ Fedora Linux == FreeBSD ]]
00:00:33.654 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:33.654 + sudo dmesg -Tw
00:00:33.654 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:33.654 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:33.654 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:33.654 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:33.654 + [[ -x /usr/src/fio-static/fio ]]
00:00:33.654 + export FIO_BIN=/usr/src/fio-static/fio
00:00:33.654 + FIO_BIN=/usr/src/fio-static/fio
00:00:33.654 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:33.654 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:33.654 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:33.654 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:33.654 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:33.654 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:33.654 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:33.654 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:33.654 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:33.654 Test configuration:
00:00:33.654 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:33.654 SPDK_TEST_NVMF=1
00:00:33.654 SPDK_TEST_NVME_CLI=1
00:00:33.654 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:33.654 SPDK_TEST_NVMF_NICS=e810
00:00:33.654 SPDK_TEST_VFIOUSER=1
00:00:33.654 SPDK_RUN_UBSAN=1
00:00:33.654 NET_TYPE=phy
00:00:33.654 RUN_NIGHTLY=0 18:43:11 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:00:33.654 18:43:11 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:33.654 18:43:11 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:33.654 18:43:11 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:33.654 18:43:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:33.654 18:43:11 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:33.654 18:43:11 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:33.654 18:43:11 -- paths/export.sh@5 -- $ export PATH
00:00:33.654 18:43:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:33.654 18:43:11 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:00:33.654 18:43:11 -- common/autobuild_common.sh@447 -- $ date +%s
00:00:33.654 18:43:11 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721839391.XXXXXX
00:00:33.654 18:43:11 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721839391.EPbNho
00:00:33.654 18:43:11 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:00:33.654 18:43:11 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:00:33.654 18:43:11 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:00:33.654 18:43:11 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:33.654 18:43:11 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:33.654 18:43:11 -- common/autobuild_common.sh@463 -- $ get_config_params
00:00:33.654 18:43:11 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:00:33.654 18:43:11 -- common/autotest_common.sh@10 -- $ set +x
00:00:33.654 18:43:11 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:00:33.654 18:43:11 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:00:33.654 18:43:11 -- pm/common@17 -- $ local monitor
00:00:33.654 18:43:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:33.654 18:43:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:33.654 18:43:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:33.654 18:43:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:33.654 18:43:11 -- pm/common@21 -- $ date +%s
00:00:33.654 18:43:11 -- pm/common@21 -- $ date +%s
00:00:33.654 18:43:11 -- pm/common@25 -- $ sleep 1
00:00:33.654 18:43:11 -- pm/common@21 -- $ date +%s
00:00:33.654 18:43:11 -- pm/common@21 -- $ date +%s
00:00:33.654 18:43:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721839391
00:00:33.654 18:43:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721839391
00:00:33.654 18:43:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721839391
00:00:33.654 18:43:11 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721839391
00:00:33.654 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721839391_collect-vmstat.pm.log
00:00:33.654 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721839391_collect-cpu-load.pm.log
00:00:33.654 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721839391_collect-cpu-temp.pm.log
00:00:33.654 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721839391_collect-bmc-pm.bmc.pm.log
00:00:34.589 18:43:12 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:00:34.589 18:43:12 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:34.589 18:43:12 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:34.589 18:43:12 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:34.589 18:43:12 -- spdk/autobuild.sh@16 -- $ date -u
00:00:34.589 Wed Jul 24 04:43:12 PM UTC 2024
00:00:34.589 18:43:12 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:34.589 v24.09-pre-319-g0a6bb28fa
00:00:34.589 18:43:12 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:34.589 18:43:12 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:34.589 18:43:12 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:34.589 18:43:12 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:00:34.589 18:43:12 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:00:34.589 18:43:12 -- common/autotest_common.sh@10 -- $ set +x
00:00:34.589 ************************************
00:00:34.589 START TEST ubsan
00:00:34.589 ************************************
00:00:34.589 18:43:12 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:00:34.589 using ubsan
00:00:34.589 
00:00:34.589 real 0m0.000s
00:00:34.589 user 0m0.000s
00:00:34.589 sys 0m0.000s
00:00:34.589 18:43:12 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:00:34.589 18:43:12 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:34.589 ************************************
00:00:34.589 END TEST ubsan
00:00:34.589 ************************************
00:00:34.589 18:43:12 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:34.589 18:43:12 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:34.589 18:43:12 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:34.589 18:43:12 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:34.589 18:43:12 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:34.589 18:43:12 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:34.589 18:43:12 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:34.589 18:43:12 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:34.589 18:43:12 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:00:34.847 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:00:34.847 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:00:35.106 Using 'verbs' RDMA provider
00:00:45.657 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:00:55.677 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:00:55.677 Creating mk/config.mk...done.
00:00:55.677 Creating mk/cc.flags.mk...done.
00:00:55.677 Type 'make' to build.
00:00:55.677 18:43:32 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
00:00:55.677 18:43:32 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:00:55.677 18:43:32 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:00:55.677 18:43:32 -- common/autotest_common.sh@10 -- $ set +x
00:00:55.677 ************************************
00:00:55.677 START TEST make
00:00:55.677 ************************************
00:00:55.677 18:43:32 make -- common/autotest_common.sh@1125 -- $ make -j48
00:00:55.677 make[1]: Nothing to be done for 'all'.
00:00:56.616 The Meson build system
00:00:56.616 Version: 1.3.1
00:00:56.616 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:00:56.616 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:00:56.616 Build type: native build
00:00:56.616 Project name: libvfio-user
00:00:56.616 Project version: 0.0.1
00:00:56.616 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:00:56.616 C linker for the host machine: cc ld.bfd 2.39-16
00:00:56.616 Host machine cpu family: x86_64
00:00:56.616 Host machine cpu: x86_64
00:00:56.616 Run-time dependency threads found: YES
00:00:56.616 Library dl found: YES
00:00:56.616 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:00:56.616 Run-time dependency json-c found: YES 0.17
00:00:56.616 Run-time dependency cmocka found: YES 1.1.7
00:00:56.616 Program pytest-3 found: NO
00:00:56.616 Program flake8 found: NO
00:00:56.616 Program misspell-fixer found: NO
00:00:56.616 Program restructuredtext-lint found: NO
00:00:56.616 Program valgrind found: YES (/usr/bin/valgrind)
00:00:56.616 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:00:56.616 Compiler for C supports arguments -Wmissing-declarations: YES
00:00:56.616 Compiler for C supports arguments -Wwrite-strings: YES
00:00:56.616 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:00:56.616 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:00:56.616 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:00:56.616 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:00:56.616 Build targets in project: 8
00:00:56.616 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:00:56.616 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:00:56.616 
00:00:56.616 libvfio-user 0.0.1
00:00:56.616 
00:00:56.616 User defined options
00:00:56.616 buildtype : debug
00:00:56.616 default_library: shared
00:00:56.616 libdir : /usr/local/lib
00:00:56.616 
00:00:56.616 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:00:57.569 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:00:57.569 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:00:57.569 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:00:57.569 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:00:57.569 [4/37] Compiling C object samples/lspci.p/lspci.c.o
00:00:57.569 [5/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:00:57.569 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:00:57.569 [7/37] Compiling C object samples/null.p/null.c.o
00:00:57.569 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:00:57.830 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:00:57.830 [10/37] Compiling C object test/unit_tests.p/mocks.c.o
00:00:57.830 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:00:57.830 [12/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:00:57.830 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:00:57.830 [14/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:00:57.830 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:00:57.830 [16/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:00:57.830 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:00:57.830 [18/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:00:57.830 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:00:57.830 [20/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:00:57.830 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:00:57.830 [22/37] Compiling C object samples/server.p/server.c.o
00:00:57.830 [23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:00:57.830 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:00:57.830 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:00:58.091 [26/37] Compiling C object samples/client.p/client.c.o
00:00:58.091 [27/37] Linking target samples/client
00:00:58.091 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:00:58.091 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:00:58.091 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:00:58.091 [31/37] Linking target test/unit_tests
00:00:58.354 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:00:58.354 [33/37] Linking target samples/server
00:00:58.354 [34/37] Linking target samples/null
00:00:58.354 [35/37] Linking target samples/gpio-pci-idio-16
00:00:58.354 [36/37] Linking target samples/lspci
00:00:58.354 [37/37] Linking target samples/shadow_ioeventfd_server
00:00:58.354 INFO: autodetecting backend as ninja
00:00:58.354 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:00:58.354 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:00:59.295 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:00:59.295 ninja: no work to do.
00:01:03.476 The Meson build system
00:01:03.476 Version: 1.3.1
00:01:03.476 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:03.476 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:03.476 Build type: native build
00:01:03.476 Program cat found: YES (/usr/bin/cat)
00:01:03.476 Project name: DPDK
00:01:03.476 Project version: 24.03.0
00:01:03.476 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:03.476 C linker for the host machine: cc ld.bfd 2.39-16
00:01:03.476 Host machine cpu family: x86_64
00:01:03.476 Host machine cpu: x86_64
00:01:03.476 Message: ## Building in Developer Mode ##
00:01:03.476 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:03.476 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:03.476 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:03.476 Program python3 found: YES (/usr/bin/python3)
00:01:03.476 Program cat found: YES (/usr/bin/cat)
00:01:03.476 Compiler for C supports arguments -march=native: YES
00:01:03.476 Checking for size of "void *" : 8
00:01:03.476 Checking for size of "void *" : 8 (cached)
00:01:03.476 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:03.476 Library m found: YES
00:01:03.476 Library numa found: YES
00:01:03.476 Has header "numaif.h" : YES
00:01:03.476 Library fdt found: NO
00:01:03.476 Library execinfo found: NO
00:01:03.476 Has header "execinfo.h" : YES
00:01:03.476 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:03.476 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:03.476 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:03.476 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:03.476 Run-time dependency openssl found: YES 3.0.9
00:01:03.476 Run-time dependency libpcap found: YES 1.10.4
00:01:03.476 Has header "pcap.h" with dependency libpcap: YES
00:01:03.476 Compiler for C supports arguments -Wcast-qual: YES
00:01:03.476 Compiler for C supports arguments -Wdeprecated: YES
00:01:03.476 Compiler for C supports arguments -Wformat: YES
00:01:03.476 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:03.476 Compiler for C supports arguments -Wformat-security: NO
00:01:03.476 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:03.476 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:03.476 Compiler for C supports arguments -Wnested-externs: YES
00:01:03.476 Compiler for C supports arguments -Wold-style-definition: YES
00:01:03.476 Compiler for C supports arguments -Wpointer-arith: YES
00:01:03.476 Compiler for C supports arguments -Wsign-compare: YES
00:01:03.476 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:03.476 Compiler for C supports arguments -Wundef: YES
00:01:03.476 Compiler for C supports arguments -Wwrite-strings: YES
00:01:03.476 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:03.476 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:03.476 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:03.476 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:03.476 Program objdump found: YES (/usr/bin/objdump)
00:01:03.476 Compiler for C supports arguments -mavx512f: YES
00:01:03.476 Checking if "AVX512 checking" compiles: YES
00:01:03.476 Fetching value of define "__SSE4_2__" : 1
00:01:03.476 Fetching value of define "__AES__" : 1
00:01:03.476 Fetching value of define "__AVX__" : 1
00:01:03.476 Fetching value of define "__AVX2__" : (undefined)
00:01:03.476 Fetching value of define "__AVX512BW__" : (undefined)
00:01:03.476 Fetching value of define "__AVX512CD__" : (undefined)
00:01:03.476 Fetching value of define "__AVX512DQ__" : (undefined)
00:01:03.476 Fetching value of define "__AVX512F__" : (undefined)
00:01:03.476 Fetching value of define "__AVX512VL__" : (undefined)
00:01:03.476 Fetching value of define "__PCLMUL__" : 1
00:01:03.476 Fetching value of define "__RDRND__" : 1
00:01:03.476 Fetching value of define "__RDSEED__" : (undefined)
00:01:03.476 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:03.476 Fetching value of define "__znver1__" : (undefined)
00:01:03.476 Fetching value of define "__znver2__" : (undefined)
00:01:03.476 Fetching value of define "__znver3__" : (undefined)
00:01:03.476 Fetching value of define "__znver4__" : (undefined)
00:01:03.476 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:03.476 Message: lib/log: Defining dependency "log"
00:01:03.476 Message: lib/kvargs: Defining dependency "kvargs"
00:01:03.476 Message: lib/telemetry: Defining dependency "telemetry"
00:01:03.476 Checking for function "getentropy" : NO
00:01:03.476 Message: lib/eal: Defining dependency "eal"
00:01:03.476 Message: lib/ring: Defining dependency "ring"
00:01:03.476 Message: lib/rcu: Defining dependency "rcu"
00:01:03.476 Message: lib/mempool: Defining dependency "mempool"
00:01:03.476 Message: lib/mbuf: Defining dependency "mbuf"
00:01:03.476 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:03.476 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:03.476 Compiler for C supports arguments -mpclmul: YES
00:01:03.476 Compiler for C supports arguments -maes: YES
00:01:03.476 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:03.476 Compiler for C supports arguments -mavx512bw: YES
00:01:03.476 Compiler for C supports arguments -mavx512dq: YES
00:01:03.476 Compiler for C supports arguments -mavx512vl: YES
00:01:03.476 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:03.476 Compiler for C supports arguments -mavx2: YES
00:01:03.476 Compiler for C supports arguments -mavx: YES
00:01:03.476 Message: lib/net: Defining dependency "net"
00:01:03.476 Message: lib/meter: Defining dependency "meter"
00:01:03.476 Message: lib/ethdev: Defining dependency "ethdev"
00:01:03.476 Message: lib/pci: Defining dependency "pci"
00:01:03.476 Message: lib/cmdline: Defining dependency "cmdline"
00:01:03.476 Message: lib/hash: Defining dependency "hash"
00:01:03.476 Message: lib/timer: Defining dependency "timer"
00:01:03.476 Message: lib/compressdev: Defining dependency "compressdev"
00:01:03.476 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:03.476 Message: lib/dmadev: Defining dependency "dmadev"
00:01:03.476 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:03.476 Message: lib/power: Defining dependency "power"
00:01:03.476 Message: lib/reorder: Defining dependency "reorder"
00:01:03.476 Message: lib/security: Defining dependency "security"
00:01:03.476 Has header "linux/userfaultfd.h" : YES
00:01:03.476 Has header "linux/vduse.h" : YES
00:01:03.476 Message: lib/vhost: Defining dependency "vhost"
00:01:03.476 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:03.476 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:03.476 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:03.476 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:03.476 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:03.476 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:03.476 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:03.476 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:03.476 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:03.476 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:03.476 Program doxygen found: YES (/usr/bin/doxygen)
00:01:03.476 Configuring doxy-api-html.conf using configuration
00:01:03.476 Configuring doxy-api-man.conf using configuration
00:01:03.476 Program mandb found: YES (/usr/bin/mandb)
00:01:03.476 Program sphinx-build found: NO
00:01:03.476 Configuring rte_build_config.h using configuration
00:01:03.476 Message: 
00:01:03.477 =================
00:01:03.477 Applications Enabled
00:01:03.477 =================
00:01:03.477 
00:01:03.477 apps:
00:01:03.477 
00:01:03.477 
00:01:03.477 Message: 
00:01:03.477 =================
00:01:03.477 Libraries Enabled
00:01:03.477 =================
00:01:03.477 
00:01:03.477 libs:
00:01:03.477 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:03.477 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:03.477 cryptodev, dmadev, power, reorder, security, vhost,
00:01:03.477 
00:01:03.477 Message: 
00:01:03.477 ===============
00:01:03.477 Drivers Enabled
00:01:03.477 ===============
00:01:03.477 
00:01:03.477 common:
00:01:03.477 
00:01:03.477 bus:
00:01:03.477 pci, vdev,
00:01:03.477 mempool:
00:01:03.477 ring,
00:01:03.477 dma:
00:01:03.477 
00:01:03.477 net:
00:01:03.477 
00:01:03.477 crypto:
00:01:03.477 
00:01:03.477 compress:
00:01:03.477 
00:01:03.477 vdpa:
00:01:03.477 
00:01:03.477 
00:01:03.477 Message: 
00:01:03.477 =================
00:01:03.477 Content Skipped
00:01:03.477 =================
00:01:03.477 
00:01:03.477 apps:
00:01:03.477 dumpcap: explicitly disabled via build config
00:01:03.477 graph: explicitly disabled via build config
00:01:03.477 pdump: explicitly disabled via build config
00:01:03.477 proc-info: explicitly disabled via build config
00:01:03.477 test-acl: explicitly disabled via build config
00:01:03.477 test-bbdev: explicitly disabled via build config
00:01:03.477 test-cmdline: explicitly disabled via build config
00:01:03.477 test-compress-perf: explicitly disabled via build config
00:01:03.477 test-crypto-perf: explicitly disabled via build config
00:01:03.477 test-dma-perf: explicitly disabled via build config
00:01:03.477 test-eventdev: explicitly disabled via build config
00:01:03.477 test-fib: explicitly disabled via build config
00:01:03.477 test-flow-perf: explicitly disabled via build config
00:01:03.477 test-gpudev: explicitly disabled via build config
00:01:03.477 test-mldev: explicitly disabled via build config
00:01:03.477 test-pipeline: explicitly disabled via build config
00:01:03.477 test-pmd: explicitly disabled via build config
00:01:03.477 test-regex: explicitly disabled via build config
00:01:03.477 test-sad: explicitly disabled via build config
00:01:03.477 test-security-perf: explicitly disabled via build config
00:01:03.477 
00:01:03.477 libs:
00:01:03.477 argparse: explicitly disabled via build config
00:01:03.477 metrics: explicitly disabled via build config
00:01:03.477 acl: explicitly disabled via build config
00:01:03.477 bbdev: explicitly disabled via build config
00:01:03.477 bitratestats: explicitly disabled via build config
00:01:03.477 bpf: explicitly disabled via build config
00:01:03.477 cfgfile: explicitly disabled via build config
00:01:03.477 distributor: explicitly disabled via build config
00:01:03.477 efd: explicitly disabled via build config
00:01:03.477 eventdev: explicitly disabled via build config
00:01:03.477 dispatcher: explicitly disabled via build config
00:01:03.477 gpudev: explicitly disabled via build config
00:01:03.477 gro: explicitly disabled via build config
00:01:03.477 gso: explicitly disabled via build config
00:01:03.477 ip_frag: explicitly disabled via build config
00:01:03.477 jobstats: explicitly disabled via build config
00:01:03.477 latencystats: explicitly disabled via build config
00:01:03.477 lpm: explicitly disabled via build config
00:01:03.477 member: explicitly disabled via build config
00:01:03.477 pcapng: explicitly disabled via build config
00:01:03.477 rawdev: explicitly disabled via build config
00:01:03.477 regexdev: explicitly disabled via build config
00:01:03.477 mldev: explicitly disabled via build config
00:01:03.477 rib: explicitly disabled via build config
00:01:03.477 sched: explicitly disabled via build config
00:01:03.477 stack: explicitly disabled via build config
00:01:03.477 ipsec: explicitly disabled via build config
00:01:03.477 pdcp: explicitly disabled via build config
00:01:03.477 fib: explicitly disabled via build config
00:01:03.477 port: explicitly disabled via build config
00:01:03.477 pdump: explicitly disabled via build config
00:01:03.477 table: explicitly disabled via build config
00:01:03.477 pipeline: explicitly disabled via build config
00:01:03.477 graph: explicitly disabled via build config
00:01:03.477 node: explicitly disabled via build config
00:01:03.477 
00:01:03.477 drivers:
00:01:03.477 common/cpt: not in enabled drivers build config
00:01:03.477 common/dpaax: not in enabled drivers build config
00:01:03.477 common/iavf: not in enabled drivers build config
00:01:03.477 common/idpf: not in enabled drivers build config
00:01:03.477 common/ionic: not in enabled drivers build config
00:01:03.477 common/mvep: not in enabled drivers build config
00:01:03.477 common/octeontx: not in enabled drivers build config
00:01:03.477 bus/auxiliary: not in enabled drivers build config
00:01:03.477 bus/cdx: not in enabled drivers build config
00:01:03.477 bus/dpaa: not in enabled drivers build config
00:01:03.477 bus/fslmc: not in enabled drivers build config
00:01:03.477 bus/ifpga: not in enabled drivers build config
00:01:03.477 bus/platform: not in enabled drivers build config
00:01:03.477 bus/uacce: not in enabled drivers build config
00:01:03.477 bus/vmbus: not in enabled drivers build config
00:01:03.477 common/cnxk: not in enabled drivers build config
00:01:03.477 common/mlx5: not in enabled drivers build config
00:01:03.477 common/nfp: not in enabled drivers build config
00:01:03.477 common/nitrox: not in enabled drivers build config
00:01:03.477 common/qat: not in enabled drivers build config
00:01:03.477 common/sfc_efx: not in enabled drivers build config
00:01:03.477 mempool/bucket: not in enabled drivers build config
00:01:03.477 mempool/cnxk: not in enabled drivers build config
00:01:03.477 mempool/dpaa: not in enabled drivers build config
00:01:03.477 mempool/dpaa2: not in enabled drivers build config
00:01:03.477 mempool/octeontx: not in enabled drivers build config
00:01:03.477 mempool/stack: not in enabled drivers build config
00:01:03.477 dma/cnxk: not in enabled drivers build config
00:01:03.477 dma/dpaa: not in enabled drivers build config
00:01:03.477 dma/dpaa2: not in enabled drivers build config
00:01:03.477 dma/hisilicon: not in enabled drivers build config
00:01:03.477 dma/idxd: not in enabled drivers build config
00:01:03.477 dma/ioat: not in enabled drivers build config
00:01:03.477 dma/skeleton: not in enabled drivers build config
00:01:03.477 net/af_packet: not in enabled drivers build config
00:01:03.477 net/af_xdp: not in enabled drivers build config
00:01:03.477 net/ark: not in enabled drivers build config
00:01:03.477 net/atlantic: not in enabled drivers build config
00:01:03.477 net/avp: not in enabled drivers build config
00:01:03.477 net/axgbe: not in enabled drivers build config
00:01:03.477 net/bnx2x: not in enabled drivers build config
00:01:03.477 net/bnxt: not in enabled drivers build config
00:01:03.477 net/bonding: not in enabled drivers build config
00:01:03.477 net/cnxk: not in enabled drivers build config
00:01:03.477 net/cpfl: not in enabled drivers build config
00:01:03.477 net/cxgbe: not in enabled drivers build config
00:01:03.477 net/dpaa: not in enabled drivers build config
00:01:03.477 net/dpaa2: not in enabled drivers build config
00:01:03.477 net/e1000: not in enabled drivers build config
00:01:03.477 net/ena: not in enabled drivers build config
00:01:03.477 net/enetc: not in enabled drivers build config
00:01:03.477 net/enetfec: not in enabled drivers build config
00:01:03.477 net/enic: not in enabled drivers build config
00:01:03.477 net/failsafe: not in enabled drivers build config
00:01:03.477 net/fm10k: not in enabled drivers build config
00:01:03.477 net/gve: not in enabled drivers build config
00:01:03.477 net/hinic: not in enabled drivers build config
00:01:03.477 net/hns3: not in enabled drivers build config
00:01:03.477 net/i40e: not in enabled drivers build config
00:01:03.477 net/iavf: not in enabled drivers build config
00:01:03.477 net/ice: not in enabled drivers build config
00:01:03.477 net/idpf: not in enabled drivers build config
00:01:03.477 net/igc: not in enabled drivers build config
00:01:03.477 net/ionic: not in enabled drivers build config
00:01:03.477 net/ipn3ke: not in enabled drivers build config
00:01:03.477 net/ixgbe: not in enabled drivers build config
00:01:03.477 net/mana: not in enabled drivers build config
00:01:03.477 net/memif: not in enabled drivers build config
00:01:03.477 net/mlx4: not in enabled drivers build config
00:01:03.477 net/mlx5: not in enabled drivers build config
00:01:03.477 net/mvneta: not in enabled drivers build config
00:01:03.477 net/mvpp2: not in enabled drivers build config
00:01:03.477 net/netvsc: not in enabled drivers build config
00:01:03.477 net/nfb: not in enabled drivers build config
00:01:03.477 net/nfp: not in enabled drivers build config
00:01:03.477 net/ngbe: not in enabled drivers build config
00:01:03.477 net/null: not in enabled drivers build config
00:01:03.477 net/octeontx: not in enabled drivers build config
00:01:03.477 net/octeon_ep: not in enabled drivers build config
00:01:03.477 net/pcap: not in enabled drivers build config
00:01:03.477 net/pfe: not in enabled drivers build config
00:01:03.477 net/qede: not in enabled drivers build config
00:01:03.477 net/ring: not in enabled drivers build config
00:01:03.477 net/sfc: not in enabled drivers build config
00:01:03.477 net/softnic: not in enabled drivers build config
00:01:03.477 net/tap: not in enabled drivers build config
00:01:03.477 net/thunderx: not in enabled drivers build config
00:01:03.477 net/txgbe: not in enabled drivers build config
00:01:03.477 net/vdev_netvsc: not in enabled drivers build config
00:01:03.477 net/vhost: not in enabled drivers build config
00:01:03.478 net/virtio: not in enabled drivers build config
00:01:03.478 net/vmxnet3: not in enabled drivers build config
00:01:03.478 raw/*: missing internal dependency, "rawdev"
00:01:03.478 crypto/armv8: not in enabled drivers build config
00:01:03.478 crypto/bcmfs: not in enabled drivers build config
00:01:03.478 crypto/caam_jr: not in enabled drivers build config
00:01:03.478 crypto/ccp: not in enabled drivers build config
00:01:03.478 crypto/cnxk: not in enabled drivers build config
00:01:03.478 crypto/dpaa_sec: not in enabled drivers build config
00:01:03.478 crypto/dpaa2_sec: not in enabled drivers build config
00:01:03.478 crypto/ipsec_mb: not in enabled drivers build config
00:01:03.478 crypto/mlx5: not in enabled drivers build config
00:01:03.478 crypto/mvsam: not in enabled drivers build config
00:01:03.478 crypto/nitrox: not in enabled drivers build config
00:01:03.478 crypto/null: not in enabled drivers build config
00:01:03.478 crypto/octeontx: not in enabled drivers build config
00:01:03.478 crypto/openssl: not in enabled drivers build config
00:01:03.478 crypto/scheduler: not in enabled drivers build config
00:01:03.478 crypto/uadk: not in enabled drivers build config
00:01:03.478 crypto/virtio: not in enabled drivers build config
00:01:03.478 compress/isal: not in enabled drivers build config
00:01:03.478 compress/mlx5: not in enabled drivers build config
00:01:03.478 compress/nitrox: not in enabled drivers build config
00:01:03.478 compress/octeontx: not in enabled drivers build config
00:01:03.478 compress/zlib: not in enabled drivers build config
00:01:03.478 regex/*: missing internal dependency, "regexdev"
00:01:03.478 ml/*: missing internal dependency, "mldev"
00:01:03.478 vdpa/ifc: not in enabled drivers build config
00:01:03.478 vdpa/mlx5: not in enabled drivers build config
00:01:03.478 vdpa/nfp: not in enabled drivers build config
00:01:03.478 vdpa/sfc: not in enabled drivers build config
00:01:03.478 event/*: missing internal dependency, "eventdev"
00:01:03.478 baseband/*: missing internal dependency, "bbdev"
00:01:03.478 gpu/*: missing internal dependency, "gpudev"
00:01:03.478 
00:01:03.478 
00:01:03.734 Build targets in project: 85
00:01:03.734 
00:01:03.734 DPDK 24.03.0
00:01:03.734 
00:01:03.734 User defined options
00:01:03.734 buildtype : debug
00:01:03.734 default_library : shared
00:01:03.734 libdir : lib
00:01:03.734 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:03.734 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:03.734 c_link_args : 
00:01:03.734 cpu_instruction_set: native
00:01:03.734 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev
00:01:03.735 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev
00:01:03.735 enable_docs : false
00:01:03.735 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:03.735 enable_kmods : false
00:01:03.735 max_lcores : 128
00:01:03.735 tests : false
00:01:03.735 
00:01:03.735 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:04.306 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:04.306 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:04.306 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:04.306 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:04.306 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:04.306 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:04.306 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:04.306 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:04.306 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:04.306 [9/268] Linking static target lib/librte_kvargs.a
00:01:04.306 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:04.306 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:04.566 [12/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:04.566 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:04.566 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:04.566 [15/268] Linking static target lib/librte_log.a
00:01:04.566 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:05.140 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:05.140 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:05.140 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:05.140 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:05.140 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:05.140 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:05.140 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:05.140 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:05.140 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:05.140 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:05.140 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:05.140 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:05.140 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:05.403 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:05.403 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:05.403 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:05.403 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:05.403 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:05.403 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:05.403 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:05.403 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:05.403 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:05.403 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:05.403 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:05.403 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:05.403 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:05.403 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:05.403 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:05.403 [45/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:05.403 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:05.403 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:05.403 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:05.403 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:05.403 [50/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:05.403 [51/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:05.403 [52/268] Linking static target lib/librte_telemetry.a
00:01:05.403 [53/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:05.403 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:05.403 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:05.403 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:05.403 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:05.403 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:05.403 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:05.403 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:05.403 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:05.403 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:05.661 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:05.661 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:05.661 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:05.661 [66/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:05.661 [67/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:05.924 [68/268] Linking static target lib/librte_pci.a
00:01:05.924 [69/268] Linking target lib/librte_log.so.24.1
00:01:05.924 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:05.924 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:06.188 [72/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:06.188 [73/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:06.188 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:06.188 [75/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:06.188 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:06.188 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:06.188 [78/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:06.189 [79/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:06.189 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:06.189 [81/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:01:06.189 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:06.189 [83/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:06.189 [84/268] Linking target lib/librte_kvargs.so.24.1
00:01:06.189 [85/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:06.189 [86/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:06.189 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:06.189 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:06.189 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:06.189 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:06.189 [91/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:06.189 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:06.189 [93/268] Linking static target lib/librte_meter.a
00:01:06.449 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:06.449 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:06.449 [96/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:06.449 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:06.449 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:06.449 [99/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:06.449 [100/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:06.449 [101/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:06.449 [102/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:06.449 [103/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:06.449 [104/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:06.449 [105/268] Linking static target lib/librte_ring.a
00:01:06.449 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:06.449 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:06.449 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:06.449 [109/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:06.449 [110/268] Linking target lib/librte_telemetry.so.24.1
00:01:06.449 [111/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:06.449 [112/268]
Linking static target lib/librte_rcu.a 00:01:06.449 [113/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:06.449 [114/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:06.449 [115/268] Linking static target lib/librte_mempool.a 00:01:06.449 [116/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:06.449 [117/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:06.449 [118/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:06.449 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:06.449 [120/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:06.449 [121/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:06.449 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:06.712 [123/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:06.712 [124/268] Linking static target lib/librte_eal.a 00:01:06.712 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:06.712 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:06.712 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:06.712 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:06.712 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:06.712 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:06.712 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:06.712 [132/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:06.712 [133/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:06.712 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:06.971 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:06.971 [136/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:06.971 [137/268] Linking static target lib/librte_net.a 00:01:06.971 [138/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.971 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:06.971 [140/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:06.971 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:06.971 [142/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.233 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:07.233 [144/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.233 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:07.233 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:07.233 [147/268] Linking static target lib/librte_cmdline.a 00:01:07.233 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:07.233 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:07.233 [150/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:07.233 [151/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:07.233 [152/268] Compiling C object 
lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:07.233 [153/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.233 [154/268] Linking static target lib/librte_timer.a 00:01:07.493 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:07.493 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:07.493 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:07.493 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:07.493 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:07.493 [160/268] Linking static target lib/librte_dmadev.a 00:01:07.493 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:07.493 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:07.752 [163/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:07.752 [164/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.752 [165/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:07.752 [166/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:07.752 [167/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:07.752 [168/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:07.752 [169/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:07.752 [170/268] Linking static target lib/librte_compressdev.a 00:01:07.752 [171/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:07.752 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:07.752 [173/268] Linking static target lib/librte_power.a 00:01:07.752 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:07.752 [175/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.752 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:07.752 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:07.752 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:08.010 [179/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:08.010 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:08.010 [181/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:08.010 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:08.010 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:08.010 [184/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:08.010 [185/268] Linking static target lib/librte_hash.a 00:01:08.010 [186/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.010 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:08.010 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:08.010 [189/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:08.010 [190/268] Linking static target lib/librte_mbuf.a 00:01:08.010 [191/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:08.010 [192/268] Linking static target lib/librte_reorder.a 
00:01:08.010 [193/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.010 [194/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:08.010 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:08.268 [196/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:08.268 [197/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.268 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:08.269 [199/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:08.269 [200/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:08.269 [201/268] Linking static target drivers/librte_bus_vdev.a 00:01:08.269 [202/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:08.269 [203/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:08.269 [204/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:08.269 [205/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.269 [206/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:08.269 [207/268] Linking static target lib/librte_security.a 00:01:08.269 [208/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.269 [209/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:08.269 [210/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:08.527 [211/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:08.527 [212/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:08.527 [213/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:08.527 [214/268] Linking static target drivers/librte_mempool_ring.a 00:01:08.527 [215/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.527 [216/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:08.527 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:08.527 [218/268] Linking static target drivers/librte_bus_pci.a 00:01:08.527 [219/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.527 [220/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.527 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:08.527 [222/268] Linking static target lib/librte_ethdev.a 00:01:08.527 [223/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:08.527 [224/268] Linking static target lib/librte_cryptodev.a 00:01:08.786 [225/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.786 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.719 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.094 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:12.993 [229/268] Generating lib/ethdev.sym_chk with a 
custom command (wrapped by meson to capture output) 00:01:12.993 [230/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.993 [231/268] Linking target lib/librte_eal.so.24.1 00:01:12.993 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:12.993 [233/268] Linking target lib/librte_ring.so.24.1 00:01:12.993 [234/268] Linking target lib/librte_meter.so.24.1 00:01:12.993 [235/268] Linking target lib/librte_pci.so.24.1 00:01:12.993 [236/268] Linking target lib/librte_timer.so.24.1 00:01:12.993 [237/268] Linking target lib/librte_dmadev.so.24.1 00:01:12.993 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:12.993 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:12.993 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:12.993 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:12.993 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:12.993 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:13.251 [244/268] Linking target lib/librte_rcu.so.24.1 00:01:13.251 [245/268] Linking target lib/librte_mempool.so.24.1 00:01:13.251 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:13.251 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:13.251 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:13.251 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:13.251 [250/268] Linking target lib/librte_mbuf.so.24.1 00:01:13.510 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:13.510 [252/268] Linking target lib/librte_compressdev.so.24.1 00:01:13.510 [253/268] Linking target lib/librte_reorder.so.24.1 00:01:13.510 [254/268] Linking target lib/librte_net.so.24.1 00:01:13.510 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:13.510 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:13.510 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:13.769 [258/268] Linking target lib/librte_hash.so.24.1 00:01:13.769 [259/268] Linking target lib/librte_cmdline.so.24.1 00:01:13.769 [260/268] Linking target lib/librte_security.so.24.1 00:01:13.769 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:13.769 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:13.769 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:13.769 [264/268] Linking target lib/librte_power.so.24.1 00:01:16.315 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:16.315 [266/268] Linking static target lib/librte_vhost.a 00:01:17.274 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.274 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:17.274 INFO: autodetecting backend as ninja 00:01:17.274 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:18.209 CC lib/ut/ut.o 00:01:18.209 CC lib/ut_mock/mock.o 00:01:18.209 CC lib/log/log.o 00:01:18.209 CC lib/log/log_flags.o 00:01:18.209 CC lib/log/log_deprecated.o 00:01:18.467 LIB 
libspdk_log.a 00:01:18.467 LIB libspdk_ut.a 00:01:18.467 LIB libspdk_ut_mock.a 00:01:18.467 SO libspdk_ut.so.2.0 00:01:18.467 SO libspdk_ut_mock.so.6.0 00:01:18.467 SO libspdk_log.so.7.0 00:01:18.467 SYMLINK libspdk_ut_mock.so 00:01:18.467 SYMLINK libspdk_ut.so 00:01:18.467 SYMLINK libspdk_log.so 00:01:18.725 CC lib/ioat/ioat.o 00:01:18.725 CXX lib/trace_parser/trace.o 00:01:18.725 CC lib/dma/dma.o 00:01:18.725 CC lib/util/base64.o 00:01:18.725 CC lib/util/bit_array.o 00:01:18.725 CC lib/util/cpuset.o 00:01:18.725 CC lib/util/crc16.o 00:01:18.725 CC lib/util/crc32.o 00:01:18.725 CC lib/util/crc32c.o 00:01:18.725 CC lib/util/crc32_ieee.o 00:01:18.725 CC lib/util/crc64.o 00:01:18.725 CC lib/util/dif.o 00:01:18.725 CC lib/util/fd.o 00:01:18.725 CC lib/util/fd_group.o 00:01:18.725 CC lib/util/file.o 00:01:18.725 CC lib/util/hexlify.o 00:01:18.725 CC lib/util/iov.o 00:01:18.725 CC lib/util/math.o 00:01:18.725 CC lib/util/net.o 00:01:18.725 CC lib/util/pipe.o 00:01:18.725 CC lib/util/strerror_tls.o 00:01:18.725 CC lib/util/string.o 00:01:18.725 CC lib/util/uuid.o 00:01:18.725 CC lib/util/xor.o 00:01:18.725 CC lib/util/zipf.o 00:01:18.725 CC lib/vfio_user/host/vfio_user_pci.o 00:01:18.725 CC lib/vfio_user/host/vfio_user.o 00:01:18.725 LIB libspdk_dma.a 00:01:18.983 SO libspdk_dma.so.4.0 00:01:18.983 SYMLINK libspdk_dma.so 00:01:18.983 LIB libspdk_ioat.a 00:01:18.983 SO libspdk_ioat.so.7.0 00:01:18.983 SYMLINK libspdk_ioat.so 00:01:18.983 LIB libspdk_vfio_user.a 00:01:18.983 SO libspdk_vfio_user.so.5.0 00:01:18.983 SYMLINK libspdk_vfio_user.so 00:01:19.241 LIB libspdk_util.a 00:01:19.241 SO libspdk_util.so.10.0 00:01:19.499 SYMLINK libspdk_util.so 00:01:19.499 CC lib/vmd/vmd.o 00:01:19.499 CC lib/env_dpdk/env.o 00:01:19.499 CC lib/rdma_provider/common.o 00:01:19.499 CC lib/json/json_parse.o 00:01:19.499 CC lib/idxd/idxd.o 00:01:19.499 CC lib/vmd/led.o 00:01:19.499 CC lib/env_dpdk/memory.o 00:01:19.499 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:19.499 CC lib/idxd/idxd_user.o 00:01:19.499 CC lib/json/json_util.o 00:01:19.499 CC lib/env_dpdk/pci.o 00:01:19.499 CC lib/rdma_utils/rdma_utils.o 00:01:19.499 CC lib/json/json_write.o 00:01:19.499 CC lib/idxd/idxd_kernel.o 00:01:19.499 CC lib/env_dpdk/init.o 00:01:19.499 CC lib/env_dpdk/threads.o 00:01:19.499 CC lib/conf/conf.o 00:01:19.499 CC lib/env_dpdk/pci_ioat.o 00:01:19.499 CC lib/env_dpdk/pci_virtio.o 00:01:19.499 CC lib/env_dpdk/pci_vmd.o 00:01:19.499 CC lib/env_dpdk/pci_idxd.o 00:01:19.499 CC lib/env_dpdk/pci_event.o 00:01:19.499 CC lib/env_dpdk/sigbus_handler.o 00:01:19.499 CC lib/env_dpdk/pci_dpdk.o 00:01:19.499 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:19.499 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:19.757 LIB libspdk_trace_parser.a 00:01:19.757 SO libspdk_trace_parser.so.5.0 00:01:19.757 LIB libspdk_rdma_provider.a 00:01:19.757 SO libspdk_rdma_provider.so.6.0 00:01:19.757 SYMLINK libspdk_trace_parser.so 00:01:19.757 LIB libspdk_conf.a 00:01:19.757 SO libspdk_conf.so.6.0 00:01:19.757 LIB libspdk_rdma_utils.a 00:01:19.757 SYMLINK libspdk_rdma_provider.so 00:01:19.757 SO libspdk_rdma_utils.so.1.0 00:01:19.757 SYMLINK libspdk_conf.so 00:01:20.016 SYMLINK libspdk_rdma_utils.so 00:01:20.016 LIB libspdk_json.a 00:01:20.016 SO libspdk_json.so.6.0 00:01:20.016 SYMLINK libspdk_json.so 00:01:20.016 LIB libspdk_idxd.a 00:01:20.274 SO libspdk_idxd.so.12.0 00:01:20.274 CC lib/jsonrpc/jsonrpc_server.o 00:01:20.274 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:20.274 CC lib/jsonrpc/jsonrpc_client.o 00:01:20.274 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:20.274 
SYMLINK libspdk_idxd.so 00:01:20.274 LIB libspdk_vmd.a 00:01:20.274 SO libspdk_vmd.so.6.0 00:01:20.274 SYMLINK libspdk_vmd.so 00:01:20.532 LIB libspdk_jsonrpc.a 00:01:20.532 SO libspdk_jsonrpc.so.6.0 00:01:20.532 SYMLINK libspdk_jsonrpc.so 00:01:20.790 CC lib/rpc/rpc.o 00:01:21.048 LIB libspdk_rpc.a 00:01:21.048 SO libspdk_rpc.so.6.0 00:01:21.048 SYMLINK libspdk_rpc.so 00:01:21.306 CC lib/notify/notify.o 00:01:21.306 CC lib/trace/trace.o 00:01:21.306 CC lib/trace/trace_flags.o 00:01:21.306 CC lib/notify/notify_rpc.o 00:01:21.306 CC lib/trace/trace_rpc.o 00:01:21.306 CC lib/keyring/keyring.o 00:01:21.306 CC lib/keyring/keyring_rpc.o 00:01:21.306 LIB libspdk_notify.a 00:01:21.306 SO libspdk_notify.so.6.0 00:01:21.306 LIB libspdk_keyring.a 00:01:21.306 SYMLINK libspdk_notify.so 00:01:21.565 LIB libspdk_trace.a 00:01:21.565 SO libspdk_keyring.so.1.0 00:01:21.565 SO libspdk_trace.so.10.0 00:01:21.566 SYMLINK libspdk_keyring.so 00:01:21.566 SYMLINK libspdk_trace.so 00:01:21.824 CC lib/sock/sock.o 00:01:21.824 CC lib/sock/sock_rpc.o 00:01:21.824 CC lib/thread/thread.o 00:01:21.824 CC lib/thread/iobuf.o 00:01:21.824 LIB libspdk_env_dpdk.a 00:01:21.824 SO libspdk_env_dpdk.so.15.0 00:01:21.824 SYMLINK libspdk_env_dpdk.so 00:01:22.082 LIB libspdk_sock.a 00:01:22.082 SO libspdk_sock.so.10.0 00:01:22.082 SYMLINK libspdk_sock.so 00:01:22.341 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:22.341 CC lib/nvme/nvme_ctrlr.o 00:01:22.341 CC lib/nvme/nvme_fabric.o 00:01:22.341 CC lib/nvme/nvme_ns_cmd.o 00:01:22.341 CC lib/nvme/nvme_ns.o 00:01:22.341 CC lib/nvme/nvme_pcie_common.o 00:01:22.341 CC lib/nvme/nvme_pcie.o 00:01:22.341 CC lib/nvme/nvme_qpair.o 00:01:22.341 CC lib/nvme/nvme.o 00:01:22.341 CC lib/nvme/nvme_quirks.o 00:01:22.341 CC lib/nvme/nvme_transport.o 00:01:22.341 CC lib/nvme/nvme_discovery.o 00:01:22.341 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:22.341 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:22.341 CC lib/nvme/nvme_tcp.o 00:01:22.341 CC lib/nvme/nvme_opal.o 00:01:22.341 CC lib/nvme/nvme_io_msg.o 00:01:22.341 CC lib/nvme/nvme_poll_group.o 00:01:22.341 CC lib/nvme/nvme_zns.o 00:01:22.341 CC lib/nvme/nvme_stubs.o 00:01:22.341 CC lib/nvme/nvme_auth.o 00:01:22.341 CC lib/nvme/nvme_cuse.o 00:01:22.341 CC lib/nvme/nvme_vfio_user.o 00:01:22.341 CC lib/nvme/nvme_rdma.o 00:01:23.275 LIB libspdk_thread.a 00:01:23.275 SO libspdk_thread.so.10.1 00:01:23.533 SYMLINK libspdk_thread.so 00:01:23.533 CC lib/accel/accel.o 00:01:23.533 CC lib/init/json_config.o 00:01:23.533 CC lib/blob/blobstore.o 00:01:23.533 CC lib/vfu_tgt/tgt_endpoint.o 00:01:23.533 CC lib/init/subsystem.o 00:01:23.533 CC lib/accel/accel_rpc.o 00:01:23.533 CC lib/blob/request.o 00:01:23.533 CC lib/accel/accel_sw.o 00:01:23.533 CC lib/virtio/virtio.o 00:01:23.533 CC lib/blob/zeroes.o 00:01:23.533 CC lib/init/subsystem_rpc.o 00:01:23.533 CC lib/init/rpc.o 00:01:23.533 CC lib/virtio/virtio_vhost_user.o 00:01:23.533 CC lib/blob/blob_bs_dev.o 00:01:23.533 CC lib/vfu_tgt/tgt_rpc.o 00:01:23.533 CC lib/virtio/virtio_vfio_user.o 00:01:23.533 CC lib/virtio/virtio_pci.o 00:01:23.791 LIB libspdk_init.a 00:01:23.791 SO libspdk_init.so.5.0 00:01:24.049 LIB libspdk_virtio.a 00:01:24.049 LIB libspdk_vfu_tgt.a 00:01:24.049 SYMLINK libspdk_init.so 00:01:24.049 SO libspdk_virtio.so.7.0 00:01:24.049 SO libspdk_vfu_tgt.so.3.0 00:01:24.049 SYMLINK libspdk_vfu_tgt.so 00:01:24.049 SYMLINK libspdk_virtio.so 00:01:24.049 CC lib/event/app.o 00:01:24.049 CC lib/event/reactor.o 00:01:24.049 CC lib/event/log_rpc.o 00:01:24.049 CC lib/event/app_rpc.o 00:01:24.049 CC 
lib/event/scheduler_static.o 00:01:24.615 LIB libspdk_event.a 00:01:24.615 SO libspdk_event.so.14.0 00:01:24.615 LIB libspdk_accel.a 00:01:24.615 SYMLINK libspdk_event.so 00:01:24.615 SO libspdk_accel.so.16.0 00:01:24.615 LIB libspdk_nvme.a 00:01:24.873 SYMLINK libspdk_accel.so 00:01:24.873 SO libspdk_nvme.so.13.1 00:01:24.873 CC lib/bdev/bdev.o 00:01:24.873 CC lib/bdev/bdev_rpc.o 00:01:24.873 CC lib/bdev/bdev_zone.o 00:01:24.873 CC lib/bdev/part.o 00:01:24.873 CC lib/bdev/scsi_nvme.o 00:01:25.131 SYMLINK libspdk_nvme.so 00:01:27.029 LIB libspdk_blob.a 00:01:27.029 SO libspdk_blob.so.11.0 00:01:27.029 SYMLINK libspdk_blob.so 00:01:27.029 CC lib/blobfs/blobfs.o 00:01:27.029 CC lib/blobfs/tree.o 00:01:27.029 CC lib/lvol/lvol.o 00:01:27.596 LIB libspdk_bdev.a 00:01:27.596 SO libspdk_bdev.so.16.0 00:01:27.596 SYMLINK libspdk_bdev.so 00:01:27.596 LIB libspdk_blobfs.a 00:01:27.864 SO libspdk_blobfs.so.10.0 00:01:27.864 SYMLINK libspdk_blobfs.so 00:01:27.864 LIB libspdk_lvol.a 00:01:27.864 CC lib/nbd/nbd.o 00:01:27.864 CC lib/ublk/ublk.o 00:01:27.864 CC lib/scsi/dev.o 00:01:27.864 CC lib/ftl/ftl_core.o 00:01:27.864 CC lib/nbd/nbd_rpc.o 00:01:27.864 CC lib/ublk/ublk_rpc.o 00:01:27.864 CC lib/ftl/ftl_init.o 00:01:27.864 CC lib/scsi/lun.o 00:01:27.864 CC lib/nvmf/ctrlr.o 00:01:27.864 CC lib/scsi/port.o 00:01:27.864 CC lib/ftl/ftl_layout.o 00:01:27.864 CC lib/nvmf/ctrlr_discovery.o 00:01:27.864 CC lib/scsi/scsi.o 00:01:27.864 CC lib/ftl/ftl_debug.o 00:01:27.864 CC lib/scsi/scsi_bdev.o 00:01:27.864 CC lib/nvmf/ctrlr_bdev.o 00:01:27.864 CC lib/ftl/ftl_io.o 00:01:27.864 CC lib/nvmf/subsystem.o 00:01:27.864 CC lib/ftl/ftl_sb.o 00:01:27.864 CC lib/scsi/scsi_pr.o 00:01:27.864 CC lib/nvmf/nvmf.o 00:01:27.864 CC lib/scsi/scsi_rpc.o 00:01:27.864 CC lib/ftl/ftl_l2p.o 00:01:27.864 CC lib/nvmf/nvmf_rpc.o 00:01:27.864 CC lib/nvmf/transport.o 00:01:27.864 CC lib/ftl/ftl_l2p_flat.o 00:01:27.864 CC lib/scsi/task.o 00:01:27.864 CC lib/nvmf/tcp.o 00:01:27.864 CC lib/nvmf/stubs.o 00:01:27.864 CC lib/ftl/ftl_nv_cache.o 00:01:27.864 CC lib/ftl/ftl_band.o 00:01:27.864 CC lib/nvmf/mdns_server.o 00:01:27.864 CC lib/ftl/ftl_band_ops.o 00:01:27.864 CC lib/ftl/ftl_writer.o 00:01:27.864 CC lib/nvmf/vfio_user.o 00:01:27.864 CC lib/nvmf/rdma.o 00:01:27.864 CC lib/ftl/ftl_rq.o 00:01:27.864 CC lib/ftl/ftl_reloc.o 00:01:27.864 CC lib/nvmf/auth.o 00:01:27.864 CC lib/ftl/ftl_l2p_cache.o 00:01:27.864 CC lib/ftl/ftl_p2l.o 00:01:27.864 CC lib/ftl/mngt/ftl_mngt.o 00:01:27.864 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:27.864 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:27.864 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:27.864 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:27.864 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:27.864 SO libspdk_lvol.so.10.0 00:01:28.122 SYMLINK libspdk_lvol.so 00:01:28.122 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:28.122 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:28.122 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:28.122 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:28.122 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:28.122 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:28.122 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:28.382 CC lib/ftl/utils/ftl_conf.o 00:01:28.382 CC lib/ftl/utils/ftl_md.o 00:01:28.382 CC lib/ftl/utils/ftl_mempool.o 00:01:28.382 CC lib/ftl/utils/ftl_bitmap.o 00:01:28.382 CC lib/ftl/utils/ftl_property.o 00:01:28.382 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:28.382 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:28.382 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:28.382 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:28.382 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:01:28.382 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:28.382 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:28.382 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:28.382 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:28.641 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:28.641 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:28.641 CC lib/ftl/base/ftl_base_dev.o 00:01:28.641 CC lib/ftl/base/ftl_base_bdev.o 00:01:28.641 CC lib/ftl/ftl_trace.o 00:01:28.641 LIB libspdk_nbd.a 00:01:28.641 SO libspdk_nbd.so.7.0 00:01:28.898 LIB libspdk_scsi.a 00:01:28.898 SYMLINK libspdk_nbd.so 00:01:28.898 SO libspdk_scsi.so.9.0 00:01:28.898 LIB libspdk_ublk.a 00:01:28.898 SO libspdk_ublk.so.3.0 00:01:28.898 SYMLINK libspdk_scsi.so 00:01:28.898 SYMLINK libspdk_ublk.so 00:01:29.156 CC lib/iscsi/conn.o 00:01:29.156 CC lib/vhost/vhost.o 00:01:29.156 CC lib/iscsi/init_grp.o 00:01:29.157 CC lib/vhost/vhost_rpc.o 00:01:29.157 CC lib/vhost/vhost_scsi.o 00:01:29.157 CC lib/iscsi/iscsi.o 00:01:29.157 CC lib/iscsi/param.o 00:01:29.157 CC lib/iscsi/md5.o 00:01:29.157 CC lib/vhost/rte_vhost_user.o 00:01:29.157 CC lib/vhost/vhost_blk.o 00:01:29.157 CC lib/iscsi/portal_grp.o 00:01:29.157 CC lib/iscsi/tgt_node.o 00:01:29.157 CC lib/iscsi/iscsi_subsystem.o 00:01:29.157 CC lib/iscsi/iscsi_rpc.o 00:01:29.157 CC lib/iscsi/task.o 00:01:29.157 LIB libspdk_ftl.a 00:01:29.415 SO libspdk_ftl.so.9.0 00:01:29.673 SYMLINK libspdk_ftl.so 00:01:30.238 LIB libspdk_vhost.a 00:01:30.497 SO libspdk_vhost.so.8.0 00:01:30.497 SYMLINK libspdk_vhost.so 00:01:30.497 LIB libspdk_nvmf.a 00:01:30.497 LIB libspdk_iscsi.a 00:01:30.497 SO libspdk_nvmf.so.19.0 00:01:30.497 SO libspdk_iscsi.so.8.0 00:01:30.755 SYMLINK libspdk_iscsi.so 00:01:30.755 SYMLINK libspdk_nvmf.so 00:01:31.013 CC module/env_dpdk/env_dpdk_rpc.o 00:01:31.013 CC module/vfu_device/vfu_virtio.o 00:01:31.013 CC module/vfu_device/vfu_virtio_blk.o 00:01:31.013 CC module/vfu_device/vfu_virtio_scsi.o 00:01:31.013 CC module/vfu_device/vfu_virtio_rpc.o 00:01:31.013 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:31.013 CC module/accel/error/accel_error.o 00:01:31.013 CC module/accel/error/accel_error_rpc.o 00:01:31.013 CC module/keyring/file/keyring.o 00:01:31.013 CC module/accel/ioat/accel_ioat.o 00:01:31.013 CC module/blob/bdev/blob_bdev.o 00:01:31.013 CC module/keyring/file/keyring_rpc.o 00:01:31.013 CC module/accel/iaa/accel_iaa.o 00:01:31.013 CC module/scheduler/gscheduler/gscheduler.o 00:01:31.013 CC module/accel/ioat/accel_ioat_rpc.o 00:01:31.013 CC module/accel/iaa/accel_iaa_rpc.o 00:01:31.013 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:31.013 CC module/keyring/linux/keyring.o 00:01:31.013 CC module/accel/dsa/accel_dsa.o 00:01:31.013 CC module/sock/posix/posix.o 00:01:31.013 CC module/keyring/linux/keyring_rpc.o 00:01:31.013 CC module/accel/dsa/accel_dsa_rpc.o 00:01:31.272 LIB libspdk_env_dpdk_rpc.a 00:01:31.272 SO libspdk_env_dpdk_rpc.so.6.0 00:01:31.272 SYMLINK libspdk_env_dpdk_rpc.so 00:01:31.272 LIB libspdk_keyring_file.a 00:01:31.272 LIB libspdk_keyring_linux.a 00:01:31.272 LIB libspdk_scheduler_gscheduler.a 00:01:31.272 LIB libspdk_scheduler_dpdk_governor.a 00:01:31.272 SO libspdk_keyring_file.so.1.0 00:01:31.272 SO libspdk_keyring_linux.so.1.0 00:01:31.272 LIB libspdk_accel_error.a 00:01:31.272 SO libspdk_scheduler_gscheduler.so.4.0 00:01:31.272 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:31.272 LIB libspdk_accel_ioat.a 00:01:31.272 LIB libspdk_scheduler_dynamic.a 00:01:31.272 SO libspdk_accel_error.so.2.0 00:01:31.272 LIB libspdk_accel_iaa.a 00:01:31.272 SO 
libspdk_scheduler_dynamic.so.4.0 00:01:31.272 SO libspdk_accel_ioat.so.6.0 00:01:31.272 SYMLINK libspdk_keyring_file.so 00:01:31.272 SYMLINK libspdk_keyring_linux.so 00:01:31.272 SYMLINK libspdk_scheduler_gscheduler.so 00:01:31.272 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:31.272 SO libspdk_accel_iaa.so.3.0 00:01:31.530 SYMLINK libspdk_accel_error.so 00:01:31.530 SYMLINK libspdk_scheduler_dynamic.so 00:01:31.530 LIB libspdk_accel_dsa.a 00:01:31.530 LIB libspdk_blob_bdev.a 00:01:31.530 SYMLINK libspdk_accel_ioat.so 00:01:31.530 SO libspdk_blob_bdev.so.11.0 00:01:31.530 SO libspdk_accel_dsa.so.5.0 00:01:31.530 SYMLINK libspdk_accel_iaa.so 00:01:31.530 SYMLINK libspdk_blob_bdev.so 00:01:31.530 SYMLINK libspdk_accel_dsa.so 00:01:31.788 LIB libspdk_vfu_device.a 00:01:31.788 SO libspdk_vfu_device.so.3.0 00:01:31.788 CC module/bdev/malloc/bdev_malloc.o 00:01:31.788 CC module/bdev/error/vbdev_error.o 00:01:31.788 CC module/bdev/raid/bdev_raid.o 00:01:31.788 CC module/bdev/lvol/vbdev_lvol.o 00:01:31.788 CC module/bdev/nvme/bdev_nvme.o 00:01:31.789 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:31.789 CC module/bdev/error/vbdev_error_rpc.o 00:01:31.789 CC module/bdev/passthru/vbdev_passthru.o 00:01:31.789 CC module/bdev/raid/bdev_raid_rpc.o 00:01:31.789 CC module/blobfs/bdev/blobfs_bdev.o 00:01:31.789 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:31.789 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:31.789 CC module/bdev/raid/bdev_raid_sb.o 00:01:31.789 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:31.789 CC module/bdev/ftl/bdev_ftl.o 00:01:31.789 CC module/bdev/aio/bdev_aio.o 00:01:31.789 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:31.789 CC module/bdev/delay/vbdev_delay.o 00:01:31.789 CC module/bdev/split/vbdev_split.o 00:01:31.789 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:31.789 CC module/bdev/gpt/gpt.o 00:01:31.789 CC module/bdev/null/bdev_null.o 00:01:31.789 CC module/bdev/raid/raid0.o 00:01:31.789 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:31.789 CC module/bdev/split/vbdev_split_rpc.o 00:01:31.789 CC module/bdev/raid/raid1.o 00:01:31.789 CC module/bdev/gpt/vbdev_gpt.o 00:01:31.789 CC module/bdev/aio/bdev_aio_rpc.o 00:01:31.789 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:31.789 CC module/bdev/null/bdev_null_rpc.o 00:01:31.789 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:31.789 CC module/bdev/nvme/nvme_rpc.o 00:01:31.789 CC module/bdev/raid/concat.o 00:01:31.789 CC module/bdev/iscsi/bdev_iscsi.o 00:01:31.789 CC module/bdev/nvme/bdev_mdns_client.o 00:01:31.789 CC module/bdev/nvme/vbdev_opal.o 00:01:31.789 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:31.789 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:31.789 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:31.789 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:31.789 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:31.789 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:31.789 SYMLINK libspdk_vfu_device.so 00:01:32.047 LIB libspdk_sock_posix.a 00:01:32.047 SO libspdk_sock_posix.so.6.0 00:01:32.047 LIB libspdk_blobfs_bdev.a 00:01:32.305 SO libspdk_blobfs_bdev.so.6.0 00:01:32.305 SYMLINK libspdk_sock_posix.so 00:01:32.305 LIB libspdk_bdev_split.a 00:01:32.305 LIB libspdk_bdev_gpt.a 00:01:32.305 LIB libspdk_bdev_null.a 00:01:32.305 SYMLINK libspdk_blobfs_bdev.so 00:01:32.305 SO libspdk_bdev_split.so.6.0 00:01:32.305 LIB libspdk_bdev_error.a 00:01:32.305 LIB libspdk_bdev_ftl.a 00:01:32.305 SO libspdk_bdev_gpt.so.6.0 00:01:32.305 SO libspdk_bdev_null.so.6.0 00:01:32.305 LIB libspdk_bdev_iscsi.a 00:01:32.305 SO libspdk_bdev_error.so.6.0 
00:01:32.305 SO libspdk_bdev_ftl.so.6.0 00:01:32.305 LIB libspdk_bdev_passthru.a 00:01:32.305 LIB libspdk_bdev_aio.a 00:01:32.305 SYMLINK libspdk_bdev_split.so 00:01:32.305 SO libspdk_bdev_iscsi.so.6.0 00:01:32.305 SO libspdk_bdev_passthru.so.6.0 00:01:32.305 SO libspdk_bdev_aio.so.6.0 00:01:32.305 SYMLINK libspdk_bdev_null.so 00:01:32.305 SYMLINK libspdk_bdev_gpt.so 00:01:32.305 SYMLINK libspdk_bdev_error.so 00:01:32.305 SYMLINK libspdk_bdev_ftl.so 00:01:32.305 LIB libspdk_bdev_delay.a 00:01:32.305 LIB libspdk_bdev_zone_block.a 00:01:32.305 SYMLINK libspdk_bdev_iscsi.so 00:01:32.305 SYMLINK libspdk_bdev_passthru.so 00:01:32.305 SYMLINK libspdk_bdev_aio.so 00:01:32.305 SO libspdk_bdev_delay.so.6.0 00:01:32.305 SO libspdk_bdev_zone_block.so.6.0 00:01:32.305 LIB libspdk_bdev_malloc.a 00:01:32.563 SO libspdk_bdev_malloc.so.6.0 00:01:32.563 SYMLINK libspdk_bdev_delay.so 00:01:32.563 SYMLINK libspdk_bdev_zone_block.so 00:01:32.563 SYMLINK libspdk_bdev_malloc.so 00:01:32.563 LIB libspdk_bdev_lvol.a 00:01:32.563 SO libspdk_bdev_lvol.so.6.0 00:01:32.563 LIB libspdk_bdev_virtio.a 00:01:32.564 SYMLINK libspdk_bdev_lvol.so 00:01:32.564 SO libspdk_bdev_virtio.so.6.0 00:01:32.564 SYMLINK libspdk_bdev_virtio.so 00:01:33.129 LIB libspdk_bdev_raid.a 00:01:33.129 SO libspdk_bdev_raid.so.6.0 00:01:33.129 SYMLINK libspdk_bdev_raid.so 00:01:34.116 LIB libspdk_bdev_nvme.a 00:01:34.116 SO libspdk_bdev_nvme.so.7.0 00:01:34.374 SYMLINK libspdk_bdev_nvme.so 00:01:34.632 CC module/event/subsystems/iobuf/iobuf.o 00:01:34.632 CC module/event/subsystems/vmd/vmd.o 00:01:34.632 CC module/event/subsystems/sock/sock.o 00:01:34.632 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:34.632 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:34.632 CC module/event/subsystems/scheduler/scheduler.o 00:01:34.632 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:34.632 CC module/event/subsystems/keyring/keyring.o 00:01:34.632 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:34.632 LIB libspdk_event_keyring.a 00:01:34.632 LIB libspdk_event_vhost_blk.a 00:01:34.890 LIB libspdk_event_vfu_tgt.a 00:01:34.891 LIB libspdk_event_vmd.a 00:01:34.891 LIB libspdk_event_scheduler.a 00:01:34.891 LIB libspdk_event_sock.a 00:01:34.891 LIB libspdk_event_iobuf.a 00:01:34.891 SO libspdk_event_vhost_blk.so.3.0 00:01:34.891 SO libspdk_event_keyring.so.1.0 00:01:34.891 SO libspdk_event_scheduler.so.4.0 00:01:34.891 SO libspdk_event_vfu_tgt.so.3.0 00:01:34.891 SO libspdk_event_sock.so.5.0 00:01:34.891 SO libspdk_event_vmd.so.6.0 00:01:34.891 SO libspdk_event_iobuf.so.3.0 00:01:34.891 SYMLINK libspdk_event_keyring.so 00:01:34.891 SYMLINK libspdk_event_vhost_blk.so 00:01:34.891 SYMLINK libspdk_event_sock.so 00:01:34.891 SYMLINK libspdk_event_vfu_tgt.so 00:01:34.891 SYMLINK libspdk_event_scheduler.so 00:01:34.891 SYMLINK libspdk_event_vmd.so 00:01:34.891 SYMLINK libspdk_event_iobuf.so 00:01:35.148 CC module/event/subsystems/accel/accel.o 00:01:35.148 LIB libspdk_event_accel.a 00:01:35.148 SO libspdk_event_accel.so.6.0 00:01:35.148 SYMLINK libspdk_event_accel.so 00:01:35.406 CC module/event/subsystems/bdev/bdev.o 00:01:35.665 LIB libspdk_event_bdev.a 00:01:35.665 SO libspdk_event_bdev.so.6.0 00:01:35.665 SYMLINK libspdk_event_bdev.so 00:01:35.922 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:35.922 CC module/event/subsystems/ublk/ublk.o 00:01:35.922 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:35.922 CC module/event/subsystems/scsi/scsi.o 00:01:35.922 CC module/event/subsystems/nbd/nbd.o 00:01:35.922 LIB libspdk_event_nbd.a 00:01:35.922 LIB 
libspdk_event_ublk.a 00:01:35.922 LIB libspdk_event_scsi.a 00:01:35.922 SO libspdk_event_nbd.so.6.0 00:01:35.922 SO libspdk_event_ublk.so.3.0 00:01:35.922 SO libspdk_event_scsi.so.6.0 00:01:35.922 SYMLINK libspdk_event_ublk.so 00:01:36.179 SYMLINK libspdk_event_nbd.so 00:01:36.179 SYMLINK libspdk_event_scsi.so 00:01:36.179 LIB libspdk_event_nvmf.a 00:01:36.179 SO libspdk_event_nvmf.so.6.0 00:01:36.179 SYMLINK libspdk_event_nvmf.so 00:01:36.179 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:36.179 CC module/event/subsystems/iscsi/iscsi.o 00:01:36.437 LIB libspdk_event_vhost_scsi.a 00:01:36.437 SO libspdk_event_vhost_scsi.so.3.0 00:01:36.437 LIB libspdk_event_iscsi.a 00:01:36.437 SO libspdk_event_iscsi.so.6.0 00:01:36.437 SYMLINK libspdk_event_vhost_scsi.so 00:01:36.437 SYMLINK libspdk_event_iscsi.so 00:01:36.696 SO libspdk.so.6.0 00:01:36.696 SYMLINK libspdk.so 00:01:36.696 CC test/rpc_client/rpc_client_test.o 00:01:36.696 TEST_HEADER include/spdk/accel.h 00:01:36.696 CC app/trace_record/trace_record.o 00:01:36.696 TEST_HEADER include/spdk/assert.h 00:01:36.696 TEST_HEADER include/spdk/accel_module.h 00:01:36.696 CXX app/trace/trace.o 00:01:36.696 TEST_HEADER include/spdk/barrier.h 00:01:36.696 TEST_HEADER include/spdk/base64.h 00:01:36.696 TEST_HEADER include/spdk/bdev.h 00:01:36.696 TEST_HEADER include/spdk/bdev_zone.h 00:01:36.696 TEST_HEADER include/spdk/bdev_module.h 00:01:36.696 CC app/spdk_top/spdk_top.o 00:01:36.696 TEST_HEADER include/spdk/bit_pool.h 00:01:36.696 TEST_HEADER include/spdk/bit_array.h 00:01:36.696 TEST_HEADER include/spdk/blob_bdev.h 00:01:36.696 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:36.696 CC app/spdk_nvme_perf/perf.o 00:01:36.696 CC app/spdk_nvme_identify/identify.o 00:01:36.696 CC app/spdk_lspci/spdk_lspci.o 00:01:36.696 TEST_HEADER include/spdk/blobfs.h 00:01:36.696 TEST_HEADER include/spdk/blob.h 00:01:36.696 TEST_HEADER include/spdk/conf.h 00:01:36.696 TEST_HEADER include/spdk/config.h 00:01:36.696 CC app/spdk_nvme_discover/discovery_aer.o 00:01:36.696 TEST_HEADER include/spdk/cpuset.h 00:01:36.696 TEST_HEADER include/spdk/crc16.h 00:01:36.696 TEST_HEADER include/spdk/crc32.h 00:01:36.696 TEST_HEADER include/spdk/crc64.h 00:01:36.696 TEST_HEADER include/spdk/dif.h 00:01:36.696 TEST_HEADER include/spdk/dma.h 00:01:36.696 TEST_HEADER include/spdk/endian.h 00:01:36.696 TEST_HEADER include/spdk/env_dpdk.h 00:01:36.696 TEST_HEADER include/spdk/env.h 00:01:36.696 TEST_HEADER include/spdk/event.h 00:01:36.696 TEST_HEADER include/spdk/fd_group.h 00:01:36.696 TEST_HEADER include/spdk/fd.h 00:01:36.696 TEST_HEADER include/spdk/ftl.h 00:01:36.696 TEST_HEADER include/spdk/file.h 00:01:36.696 TEST_HEADER include/spdk/gpt_spec.h 00:01:36.696 TEST_HEADER include/spdk/hexlify.h 00:01:36.696 TEST_HEADER include/spdk/histogram_data.h 00:01:36.696 TEST_HEADER include/spdk/idxd.h 00:01:36.696 TEST_HEADER include/spdk/idxd_spec.h 00:01:36.696 TEST_HEADER include/spdk/init.h 00:01:36.696 TEST_HEADER include/spdk/ioat.h 00:01:36.696 TEST_HEADER include/spdk/ioat_spec.h 00:01:36.696 TEST_HEADER include/spdk/iscsi_spec.h 00:01:36.696 TEST_HEADER include/spdk/json.h 00:01:36.696 TEST_HEADER include/spdk/jsonrpc.h 00:01:36.696 TEST_HEADER include/spdk/keyring.h 00:01:36.696 TEST_HEADER include/spdk/keyring_module.h 00:01:36.696 TEST_HEADER include/spdk/likely.h 00:01:36.696 TEST_HEADER include/spdk/log.h 00:01:36.696 TEST_HEADER include/spdk/lvol.h 00:01:36.696 TEST_HEADER include/spdk/mmio.h 00:01:36.696 TEST_HEADER include/spdk/memory.h 00:01:36.696 TEST_HEADER 
include/spdk/nbd.h 00:01:36.696 TEST_HEADER include/spdk/net.h 00:01:36.696 TEST_HEADER include/spdk/notify.h 00:01:36.696 TEST_HEADER include/spdk/nvme.h 00:01:36.696 TEST_HEADER include/spdk/nvme_intel.h 00:01:36.696 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:36.696 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:36.696 TEST_HEADER include/spdk/nvme_spec.h 00:01:36.696 TEST_HEADER include/spdk/nvme_zns.h 00:01:36.696 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:36.696 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:36.696 TEST_HEADER include/spdk/nvmf.h 00:01:36.696 TEST_HEADER include/spdk/nvmf_spec.h 00:01:36.696 TEST_HEADER include/spdk/nvmf_transport.h 00:01:36.696 TEST_HEADER include/spdk/opal.h 00:01:36.696 TEST_HEADER include/spdk/opal_spec.h 00:01:36.696 TEST_HEADER include/spdk/pci_ids.h 00:01:36.696 TEST_HEADER include/spdk/pipe.h 00:01:36.696 TEST_HEADER include/spdk/reduce.h 00:01:36.696 TEST_HEADER include/spdk/queue.h 00:01:36.696 TEST_HEADER include/spdk/rpc.h 00:01:36.696 TEST_HEADER include/spdk/scheduler.h 00:01:36.696 TEST_HEADER include/spdk/scsi.h 00:01:36.696 TEST_HEADER include/spdk/scsi_spec.h 00:01:36.696 TEST_HEADER include/spdk/sock.h 00:01:36.696 TEST_HEADER include/spdk/string.h 00:01:36.696 TEST_HEADER include/spdk/stdinc.h 00:01:36.696 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:36.696 TEST_HEADER include/spdk/thread.h 00:01:36.696 TEST_HEADER include/spdk/trace.h 00:01:36.696 TEST_HEADER include/spdk/tree.h 00:01:36.696 TEST_HEADER include/spdk/trace_parser.h 00:01:36.696 TEST_HEADER include/spdk/ublk.h 00:01:36.696 TEST_HEADER include/spdk/util.h 00:01:36.696 TEST_HEADER include/spdk/uuid.h 00:01:36.696 TEST_HEADER include/spdk/version.h 00:01:36.696 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:36.963 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:36.964 TEST_HEADER include/spdk/vhost.h 00:01:36.964 TEST_HEADER include/spdk/vmd.h 00:01:36.964 TEST_HEADER include/spdk/xor.h 00:01:36.964 TEST_HEADER include/spdk/zipf.h 00:01:36.964 CXX test/cpp_headers/accel.o 00:01:36.964 CXX test/cpp_headers/accel_module.o 00:01:36.964 CXX test/cpp_headers/assert.o 00:01:36.964 CXX test/cpp_headers/barrier.o 00:01:36.964 CXX test/cpp_headers/base64.o 00:01:36.964 CXX test/cpp_headers/bdev.o 00:01:36.964 CXX test/cpp_headers/bdev_module.o 00:01:36.964 CXX test/cpp_headers/bdev_zone.o 00:01:36.964 CXX test/cpp_headers/bit_array.o 00:01:36.964 CXX test/cpp_headers/bit_pool.o 00:01:36.964 CC app/spdk_dd/spdk_dd.o 00:01:36.964 CXX test/cpp_headers/blob_bdev.o 00:01:36.964 CXX test/cpp_headers/blobfs_bdev.o 00:01:36.964 CXX test/cpp_headers/blobfs.o 00:01:36.964 CXX test/cpp_headers/blob.o 00:01:36.964 CXX test/cpp_headers/conf.o 00:01:36.964 CXX test/cpp_headers/config.o 00:01:36.964 CXX test/cpp_headers/cpuset.o 00:01:36.964 CXX test/cpp_headers/crc16.o 00:01:36.964 CC app/iscsi_tgt/iscsi_tgt.o 00:01:36.964 CC app/nvmf_tgt/nvmf_main.o 00:01:36.964 CXX test/cpp_headers/crc32.o 00:01:36.964 CC app/spdk_tgt/spdk_tgt.o 00:01:36.964 CC examples/util/zipf/zipf.o 00:01:36.964 CC examples/ioat/verify/verify.o 00:01:36.964 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:36.964 CC test/thread/poller_perf/poller_perf.o 00:01:36.964 CC examples/ioat/perf/perf.o 00:01:36.964 CC test/env/pci/pci_ut.o 00:01:36.964 CC app/fio/nvme/fio_plugin.o 00:01:36.964 CC test/env/memory/memory_ut.o 00:01:36.964 CC test/app/stub/stub.o 00:01:36.964 CC test/app/jsoncat/jsoncat.o 00:01:36.964 CC test/app/histogram_perf/histogram_perf.o 00:01:36.964 CC test/env/vtophys/vtophys.o 00:01:36.964 
CC test/dma/test_dma/test_dma.o 00:01:36.964 CC test/app/bdev_svc/bdev_svc.o 00:01:36.964 CC app/fio/bdev/fio_plugin.o 00:01:37.226 LINK spdk_lspci 00:01:37.227 CC test/env/mem_callbacks/mem_callbacks.o 00:01:37.227 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:37.227 LINK rpc_client_test 00:01:37.227 LINK spdk_nvme_discover 00:01:37.227 LINK interrupt_tgt 00:01:37.227 LINK jsoncat 00:01:37.227 LINK zipf 00:01:37.227 LINK histogram_perf 00:01:37.227 LINK env_dpdk_post_init 00:01:37.227 LINK poller_perf 00:01:37.227 LINK nvmf_tgt 00:01:37.227 CXX test/cpp_headers/crc64.o 00:01:37.227 CXX test/cpp_headers/dif.o 00:01:37.227 LINK vtophys 00:01:37.227 CXX test/cpp_headers/dma.o 00:01:37.227 CXX test/cpp_headers/endian.o 00:01:37.227 CXX test/cpp_headers/env_dpdk.o 00:01:37.227 CXX test/cpp_headers/env.o 00:01:37.227 CXX test/cpp_headers/event.o 00:01:37.227 CXX test/cpp_headers/fd_group.o 00:01:37.227 CXX test/cpp_headers/fd.o 00:01:37.227 LINK stub 00:01:37.227 CXX test/cpp_headers/file.o 00:01:37.227 CXX test/cpp_headers/ftl.o 00:01:37.227 LINK spdk_trace_record 00:01:37.489 LINK iscsi_tgt 00:01:37.490 CXX test/cpp_headers/gpt_spec.o 00:01:37.490 CXX test/cpp_headers/hexlify.o 00:01:37.490 CXX test/cpp_headers/histogram_data.o 00:01:37.490 CXX test/cpp_headers/idxd.o 00:01:37.490 LINK verify 00:01:37.490 LINK bdev_svc 00:01:37.490 CXX test/cpp_headers/idxd_spec.o 00:01:37.490 LINK spdk_tgt 00:01:37.490 LINK ioat_perf 00:01:37.490 CXX test/cpp_headers/init.o 00:01:37.490 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:37.490 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:37.490 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:37.490 CXX test/cpp_headers/ioat.o 00:01:37.490 LINK spdk_dd 00:01:37.490 CXX test/cpp_headers/ioat_spec.o 00:01:37.490 CXX test/cpp_headers/iscsi_spec.o 00:01:37.757 CXX test/cpp_headers/json.o 00:01:37.757 LINK spdk_trace 00:01:37.757 CXX test/cpp_headers/jsonrpc.o 00:01:37.757 CXX test/cpp_headers/keyring.o 00:01:37.757 CXX test/cpp_headers/keyring_module.o 00:01:37.757 CXX test/cpp_headers/likely.o 00:01:37.757 CXX test/cpp_headers/log.o 00:01:37.757 CXX test/cpp_headers/lvol.o 00:01:37.757 CXX test/cpp_headers/memory.o 00:01:37.757 CXX test/cpp_headers/mmio.o 00:01:37.757 CXX test/cpp_headers/nbd.o 00:01:37.757 CXX test/cpp_headers/net.o 00:01:37.757 CXX test/cpp_headers/notify.o 00:01:37.757 CXX test/cpp_headers/nvme.o 00:01:37.757 CXX test/cpp_headers/nvme_intel.o 00:01:37.757 LINK pci_ut 00:01:37.757 CXX test/cpp_headers/nvme_ocssd.o 00:01:37.757 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:37.757 CXX test/cpp_headers/nvme_spec.o 00:01:37.757 CXX test/cpp_headers/nvme_zns.o 00:01:37.757 CXX test/cpp_headers/nvmf_cmd.o 00:01:37.757 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:37.757 LINK test_dma 00:01:37.758 CXX test/cpp_headers/nvmf.o 00:01:37.758 CXX test/cpp_headers/nvmf_spec.o 00:01:37.758 CXX test/cpp_headers/nvmf_transport.o 00:01:37.758 CXX test/cpp_headers/opal.o 00:01:37.758 CXX test/cpp_headers/opal_spec.o 00:01:38.019 CC examples/sock/hello_world/hello_sock.o 00:01:38.019 CC test/event/event_perf/event_perf.o 00:01:38.019 CXX test/cpp_headers/pci_ids.o 00:01:38.019 CC test/event/reactor/reactor.o 00:01:38.019 CC examples/vmd/lsvmd/lsvmd.o 00:01:38.019 LINK spdk_nvme 00:01:38.019 LINK nvme_fuzz 00:01:38.019 CC examples/thread/thread/thread_ex.o 00:01:38.019 LINK spdk_bdev 00:01:38.019 CC examples/idxd/perf/perf.o 00:01:38.019 CXX test/cpp_headers/pipe.o 00:01:38.019 CC test/event/reactor_perf/reactor_perf.o 00:01:38.019 CXX test/cpp_headers/queue.o 
00:01:38.019 CXX test/cpp_headers/reduce.o 00:01:38.019 CXX test/cpp_headers/rpc.o 00:01:38.019 CXX test/cpp_headers/scheduler.o 00:01:38.019 CXX test/cpp_headers/scsi.o 00:01:38.019 CXX test/cpp_headers/scsi_spec.o 00:01:38.019 CC examples/vmd/led/led.o 00:01:38.019 CC test/event/app_repeat/app_repeat.o 00:01:38.284 CXX test/cpp_headers/sock.o 00:01:38.284 CXX test/cpp_headers/stdinc.o 00:01:38.284 CXX test/cpp_headers/string.o 00:01:38.284 CXX test/cpp_headers/thread.o 00:01:38.284 CXX test/cpp_headers/trace.o 00:01:38.284 CXX test/cpp_headers/trace_parser.o 00:01:38.284 CXX test/cpp_headers/tree.o 00:01:38.284 CC test/event/scheduler/scheduler.o 00:01:38.284 CXX test/cpp_headers/ublk.o 00:01:38.284 CXX test/cpp_headers/util.o 00:01:38.284 CXX test/cpp_headers/uuid.o 00:01:38.284 CXX test/cpp_headers/version.o 00:01:38.284 CXX test/cpp_headers/vfio_user_pci.o 00:01:38.284 CC app/vhost/vhost.o 00:01:38.284 CXX test/cpp_headers/vfio_user_spec.o 00:01:38.284 CXX test/cpp_headers/vhost.o 00:01:38.284 CXX test/cpp_headers/vmd.o 00:01:38.284 CXX test/cpp_headers/xor.o 00:01:38.284 CXX test/cpp_headers/zipf.o 00:01:38.284 LINK event_perf 00:01:38.284 LINK lsvmd 00:01:38.284 LINK reactor 00:01:38.284 LINK spdk_nvme_perf 00:01:38.284 LINK reactor_perf 00:01:38.284 LINK mem_callbacks 00:01:38.284 LINK vhost_fuzz 00:01:38.542 LINK spdk_nvme_identify 00:01:38.542 LINK spdk_top 00:01:38.542 LINK hello_sock 00:01:38.542 LINK app_repeat 00:01:38.542 LINK led 00:01:38.542 LINK thread 00:01:38.542 CC test/nvme/overhead/overhead.o 00:01:38.542 CC test/nvme/aer/aer.o 00:01:38.542 CC test/nvme/e2edp/nvme_dp.o 00:01:38.542 CC test/nvme/startup/startup.o 00:01:38.542 CC test/nvme/reset/reset.o 00:01:38.542 CC test/nvme/err_injection/err_injection.o 00:01:38.542 CC test/nvme/sgl/sgl.o 00:01:38.542 CC test/accel/dif/dif.o 00:01:38.542 CC test/blobfs/mkfs/mkfs.o 00:01:38.542 LINK scheduler 00:01:38.542 CC test/nvme/reserve/reserve.o 00:01:38.542 LINK vhost 00:01:38.800 CC test/nvme/connect_stress/connect_stress.o 00:01:38.800 CC test/nvme/simple_copy/simple_copy.o 00:01:38.800 LINK idxd_perf 00:01:38.800 CC test/nvme/boot_partition/boot_partition.o 00:01:38.800 CC test/nvme/compliance/nvme_compliance.o 00:01:38.800 CC test/nvme/fused_ordering/fused_ordering.o 00:01:38.800 CC test/nvme/cuse/cuse.o 00:01:38.800 CC test/lvol/esnap/esnap.o 00:01:38.800 CC test/nvme/fdp/fdp.o 00:01:38.800 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:38.800 LINK err_injection 00:01:38.800 LINK connect_stress 00:01:38.800 LINK startup 00:01:38.800 LINK reserve 00:01:39.059 CC examples/nvme/abort/abort.o 00:01:39.059 CC examples/nvme/reconnect/reconnect.o 00:01:39.059 CC examples/nvme/hello_world/hello_world.o 00:01:39.059 CC examples/nvme/hotplug/hotplug.o 00:01:39.059 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:39.059 CC examples/nvme/arbitration/arbitration.o 00:01:39.059 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:39.059 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:39.059 LINK mkfs 00:01:39.059 LINK sgl 00:01:39.059 LINK nvme_dp 00:01:39.059 LINK doorbell_aers 00:01:39.059 LINK fused_ordering 00:01:39.059 LINK simple_copy 00:01:39.059 LINK overhead 00:01:39.059 LINK boot_partition 00:01:39.059 LINK aer 00:01:39.059 LINK fdp 00:01:39.059 LINK nvme_compliance 00:01:39.059 LINK reset 00:01:39.059 LINK memory_ut 00:01:39.316 LINK cmb_copy 00:01:39.316 CC examples/accel/perf/accel_perf.o 00:01:39.316 CC examples/blob/hello_world/hello_blob.o 00:01:39.316 CC examples/blob/cli/blobcli.o 00:01:39.316 LINK hotplug 
00:01:39.316 LINK pmr_persistence 00:01:39.316 LINK hello_world 00:01:39.316 LINK arbitration 00:01:39.316 LINK abort 00:01:39.316 LINK dif 00:01:39.574 LINK reconnect 00:01:39.574 LINK hello_blob 00:01:39.574 LINK nvme_manage 00:01:39.574 LINK accel_perf 00:01:39.832 LINK blobcli 00:01:39.832 CC test/bdev/bdevio/bdevio.o 00:01:39.832 LINK iscsi_fuzz 00:01:40.089 CC examples/bdev/hello_world/hello_bdev.o 00:01:40.089 CC examples/bdev/bdevperf/bdevperf.o 00:01:40.089 LINK bdevio 00:01:40.347 LINK cuse 00:01:40.347 LINK hello_bdev 00:01:40.912 LINK bdevperf 00:01:41.170 CC examples/nvmf/nvmf/nvmf.o 00:01:41.428 LINK nvmf 00:01:43.955 LINK esnap 00:01:44.213 00:01:44.213 real 0m49.235s 00:01:44.213 user 10m7.633s 00:01:44.213 sys 2m31.621s 00:01:44.213 18:44:21 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:44.213 18:44:21 make -- common/autotest_common.sh@10 -- $ set +x 00:01:44.213 ************************************ 00:01:44.213 END TEST make 00:01:44.213 ************************************ 00:01:44.213 18:44:21 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:44.213 18:44:21 -- pm/common@29 -- $ signal_monitor_resources TERM 00:01:44.213 18:44:21 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:01:44.213 18:44:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.213 18:44:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:44.213 18:44:21 -- pm/common@44 -- $ pid=2939093 00:01:44.213 18:44:21 -- pm/common@50 -- $ kill -TERM 2939093 00:01:44.213 18:44:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.213 18:44:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:44.213 18:44:21 -- pm/common@44 -- $ pid=2939095 00:01:44.213 18:44:21 -- pm/common@50 -- $ kill -TERM 2939095 00:01:44.213 18:44:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.213 18:44:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:44.213 18:44:21 -- pm/common@44 -- $ pid=2939097 00:01:44.213 18:44:21 -- pm/common@50 -- $ kill -TERM 2939097 00:01:44.213 18:44:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.213 18:44:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:44.213 18:44:21 -- pm/common@44 -- $ pid=2939125 00:01:44.213 18:44:21 -- pm/common@50 -- $ sudo -E kill -TERM 2939125 00:01:44.213 18:44:21 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:44.213 18:44:21 -- nvmf/common.sh@7 -- # uname -s 00:01:44.213 18:44:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:44.214 18:44:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:44.214 18:44:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:44.214 18:44:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:44.214 18:44:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:44.214 18:44:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:44.214 18:44:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:44.214 18:44:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:44.214 18:44:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:44.214 18:44:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:44.214 18:44:21 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:01:44.214 18:44:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:01:44.214 18:44:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:44.214 18:44:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:44.214 18:44:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:44.214 18:44:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:44.214 18:44:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:44.214 18:44:21 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:44.214 18:44:21 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:44.214 18:44:21 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:44.214 18:44:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.214 18:44:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.214 18:44:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.214 18:44:21 -- paths/export.sh@5 -- # export PATH 00:01:44.214 18:44:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.214 18:44:21 -- nvmf/common.sh@47 -- # : 0 00:01:44.214 18:44:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:44.214 18:44:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:44.214 18:44:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:44.214 18:44:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:44.214 18:44:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:44.214 18:44:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:44.214 18:44:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:44.214 18:44:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:44.214 18:44:21 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:44.214 18:44:21 -- spdk/autotest.sh@32 -- # uname -s 00:01:44.214 18:44:21 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:44.214 18:44:21 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:44.214 18:44:21 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:44.214 18:44:21 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:44.214 18:44:21 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:44.214 18:44:21 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:44.214 18:44:21 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:44.214 18:44:21 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:44.214 18:44:21 -- spdk/autotest.sh@48 -- # udevadm_pid=2994588 00:01:44.214 18:44:21 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:44.214 18:44:21 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:44.214 18:44:21 -- pm/common@17 -- # local monitor 00:01:44.214 18:44:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.214 18:44:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.214 18:44:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.214 18:44:21 -- pm/common@21 -- # date +%s 00:01:44.214 18:44:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.214 18:44:21 -- pm/common@21 -- # date +%s 00:01:44.214 18:44:21 -- pm/common@25 -- # sleep 1 00:01:44.214 18:44:21 -- pm/common@21 -- # date +%s 00:01:44.214 18:44:21 -- pm/common@21 -- # date +%s 00:01:44.214 18:44:21 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721839461 00:01:44.214 18:44:21 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721839461 00:01:44.214 18:44:21 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721839461 00:01:44.214 18:44:21 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721839461 00:01:44.214 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721839461_collect-vmstat.pm.log 00:01:44.214 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721839461_collect-cpu-load.pm.log 00:01:44.214 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721839461_collect-cpu-temp.pm.log 00:01:44.214 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721839461_collect-bmc-pm.bmc.pm.log 00:01:45.150 18:44:22 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:45.150 18:44:22 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:45.150 18:44:22 -- common/autotest_common.sh@724 -- # xtrace_disable 00:01:45.150 18:44:22 -- common/autotest_common.sh@10 -- # set +x 00:01:45.150 18:44:22 -- spdk/autotest.sh@59 -- # create_test_list 00:01:45.150 18:44:22 -- common/autotest_common.sh@748 -- # xtrace_disable 00:01:45.150 18:44:22 -- common/autotest_common.sh@10 -- # set +x 00:01:45.408 18:44:22 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:45.408 18:44:22 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:45.408 18:44:22 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
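The trace above (autotest.sh@33 through @40) saves the systemd-coredump handler, creates the coredumps output directory, and installs SPDK's core-collector.sh as the kernel's piped core handler before the pm monitors start. A minimal sketch of that swap-and-restore pattern follows; the redirect target is cut off in the log, so writing /proc/sys/kernel/core_pattern (root required) is an assumption, and the trap-based restore is illustrative rather than copied from autotest.sh:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    old_core_pattern=$(< /proc/sys/kernel/core_pattern)   # '|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' on this node
    mkdir -p "$rootdir/../output/coredumps"
    # %P = PID of the dumping process, %s = signal number, %t = time of dump
    echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
    trap 'echo "$old_core_pattern" > /proc/sys/kernel/core_pattern' EXIT

Because the handler starts with '|', the kernel pipes each core dump to the collector's stdin, so the collector decides where dumps land instead of the crashing process's working directory.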
00:01:45.408 18:44:22 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:45.408 18:44:22 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:45.408 18:44:22 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:45.408 18:44:22 -- common/autotest_common.sh@1455 -- # uname 00:01:45.408 18:44:22 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:01:45.408 18:44:22 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:45.408 18:44:22 -- common/autotest_common.sh@1475 -- # uname 00:01:45.408 18:44:22 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:01:45.408 18:44:22 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:45.408 18:44:22 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:45.408 18:44:22 -- spdk/autotest.sh@72 -- # hash lcov 00:01:45.408 18:44:22 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:45.408 18:44:22 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:45.408 --rc lcov_branch_coverage=1 00:01:45.408 --rc lcov_function_coverage=1 00:01:45.408 --rc genhtml_branch_coverage=1 00:01:45.408 --rc genhtml_function_coverage=1 00:01:45.408 --rc genhtml_legend=1 00:01:45.408 --rc geninfo_all_blocks=1 00:01:45.408 ' 00:01:45.408 18:44:22 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:45.408 --rc lcov_branch_coverage=1 00:01:45.408 --rc lcov_function_coverage=1 00:01:45.408 --rc genhtml_branch_coverage=1 00:01:45.408 --rc genhtml_function_coverage=1 00:01:45.408 --rc genhtml_legend=1 00:01:45.408 --rc geninfo_all_blocks=1 00:01:45.408 ' 00:01:45.408 18:44:22 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:45.408 --rc lcov_branch_coverage=1 00:01:45.408 --rc lcov_function_coverage=1 00:01:45.408 --rc genhtml_branch_coverage=1 00:01:45.408 --rc genhtml_function_coverage=1 00:01:45.408 --rc genhtml_legend=1 00:01:45.408 --rc geninfo_all_blocks=1 00:01:45.408 --no-external' 00:01:45.408 18:44:22 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:45.408 --rc lcov_branch_coverage=1 00:01:45.408 --rc lcov_function_coverage=1 00:01:45.408 --rc genhtml_branch_coverage=1 00:01:45.408 --rc genhtml_function_coverage=1 00:01:45.408 --rc genhtml_legend=1 00:01:45.408 --rc geninfo_all_blocks=1 00:01:45.408 --no-external' 00:01:45.408 18:44:22 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:45.408 lcov: LCOV version 1.14 00:01:45.408 18:44:22 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:00.283 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:00.283 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:15.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:15.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:15.202 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:15.202 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno
[geninfo printed the same "no functions found" warning for every remaining header stub under test/cpp_headers/ (assert.gcno through version.gcno); the near-identical lines are condensed here]
00:02:15.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:15.204 geninfo: WARNING: GCOV did not
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:15.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:15.204 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:15.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:15.204 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:15.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:15.204 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:15.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:15.204 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:15.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:15.204 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:19.390 18:44:56 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:19.390 18:44:56 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:19.390 18:44:56 -- common/autotest_common.sh@10 -- # set +x 00:02:19.390 18:44:56 -- spdk/autotest.sh@91 -- # rm -f 00:02:19.390 18:44:56 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:19.649 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:19.649 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:02:19.649 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:19.649 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:19.649 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:19.649 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:19.649 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:19.649 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:19.906 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:02:19.906 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:19.906 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:19.906 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:19.906 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:19.906 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:19.906 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:19.906 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:02:19.906 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:19.906 18:44:57 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:19.906 18:44:57 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:19.906 18:44:57 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:19.906 18:44:57 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:19.906 18:44:57 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:19.906 18:44:57 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:19.906 18:44:57 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 
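autotest.sh@96 is scanning for zoned block devices so they can be excluded from the generic tests; the membership test itself continues in the trace below. A sketch of the logic reconstructed from this xtrace (function bodies assumed from the traced commands, not copied from autotest_common.sh):

    # Record any /sys/block/nvme* device whose queue/zoned attribute is not "none".
    is_block_zoned() {
        local device=$1
        [[ -e /sys/block/$device/queue/zoned ]] || return 1
        [[ $(< "/sys/block/$device/queue/zoned") != none ]]
    }

    get_zoned_devs() {
        local -gA zoned_devs=()
        local nvme bdf
        for nvme in /sys/block/nvme*; do
            if is_block_zoned "${nvme##*/}"; then
                zoned_devs["${nvme##*/}"]=1   # the real helper likely stores a PCI bdf; simplified
            fi
        done
    }

On this node nvme0n1's zoned attribute reads none, so the [[ none != none ]] test below fails and zoned_devs stays empty, which is why the (( 0 > 0 )) check at autotest.sh@98 is false.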
00:02:19.906 18:44:57 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:19.907 18:44:57 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:19.907 18:44:57 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:19.907 18:44:57 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:19.907 18:44:57 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:19.907 18:44:57 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:19.907 18:44:57 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:19.907 18:44:57 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:20.164 No valid GPT data, bailing 00:02:20.164 18:44:57 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:20.164 18:44:57 -- scripts/common.sh@391 -- # pt= 00:02:20.164 18:44:57 -- scripts/common.sh@392 -- # return 1 00:02:20.165 18:44:57 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:20.165 1+0 records in 00:02:20.165 1+0 records out 00:02:20.165 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00254457 s, 412 MB/s 00:02:20.165 18:44:57 -- spdk/autotest.sh@118 -- # sync 00:02:20.165 18:44:57 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:20.165 18:44:57 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:20.165 18:44:57 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:22.064 18:44:59 -- spdk/autotest.sh@124 -- # uname -s 00:02:22.064 18:44:59 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:22.064 18:44:59 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:22.064 18:44:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:22.064 18:44:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:22.064 18:44:59 -- common/autotest_common.sh@10 -- # set +x 00:02:22.064 ************************************ 00:02:22.064 START TEST setup.sh 00:02:22.064 ************************************ 00:02:22.064 18:44:59 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:22.064 * Looking for test storage... 00:02:22.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:22.064 18:44:59 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:22.064 18:44:59 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:22.064 18:44:59 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:22.064 18:44:59 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:22.064 18:44:59 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:22.064 18:44:59 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:22.064 ************************************ 00:02:22.064 START TEST acl 00:02:22.064 ************************************ 00:02:22.064 18:44:59 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:22.064 * Looking for test storage... 
00:02:22.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:22.064 18:44:59 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:22.064 18:44:59 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:22.064 18:44:59 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:22.064 18:44:59 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:22.064 18:44:59 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:22.064 18:44:59 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:22.064 18:44:59 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:22.064 18:44:59 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:22.064 18:44:59 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:22.064 18:44:59 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:22.064 18:44:59 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:22.064 18:44:59 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:22.064 18:44:59 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:22.064 18:44:59 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:22.064 18:44:59 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:22.064 18:44:59 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:23.437 18:45:00 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:23.437 18:45:00 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:23.437 18:45:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.437 18:45:00 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:23.437 18:45:00 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:23.437 18:45:00 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:24.371 Hugepages 00:02:24.371 node hugesize free / total 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.371 00:02:24.371 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:24.371 18:45:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:24.372 18:45:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.372 18:45:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:24.372 18:45:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:24.372 18:45:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:24.372 18:45:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.372 18:45:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:24.372 18:45:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:24.372 18:45:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:24.372 18:45:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.630 18:45:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:0b:00.0 == *:*:*.* ]] 00:02:24.630 18:45:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:24.630 18:45:01 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]] 00:02:24.630 18:45:01 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:24.630 18:45:01 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:24.630 18:45:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.630 18:45:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:24.630 18:45:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:24.630 18:45:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:24.630 18:45:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.630 18:45:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:24.630 18:45:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:24.630 18:45:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:24.630 18:45:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.630 18:45:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:24.630 18:45:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:02:24.630 18:45:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:24.630 18:45:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.630 18:45:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:24.630 18:45:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:24.630 18:45:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:24.630 18:45:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.630 18:45:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:24.630 18:45:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:24.630 18:45:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:24.630 18:45:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.630 18:45:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:24.630 18:45:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:24.630 18:45:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:24.630 18:45:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.630 18:45:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:24.630 18:45:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:24.630 18:45:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:24.630 18:45:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.630 18:45:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:24.630 18:45:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:24.630 18:45:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:24.630 18:45:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.630 18:45:02 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:24.630 18:45:02 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:24.630 18:45:02 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:24.630 18:45:02 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:24.630 18:45:02 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:24.630 ************************************ 00:02:24.630 START TEST denied 00:02:24.630 ************************************ 00:02:24.630 18:45:02 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:02:24.630 18:45:02 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:0b:00.0' 00:02:24.630 18:45:02 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:24.630 18:45:02 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:0b:00.0' 00:02:24.630 18:45:02 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:24.630 18:45:02 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:26.002 0000:0b:00.0 (8086 0a54): Skipping denied controller at 0000:0b:00.0 00:02:26.002 18:45:03 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:0b:00.0 00:02:26.002 18:45:03 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:26.002 18:45:03 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:26.002 18:45:03 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:0b:00.0 ]] 00:02:26.002 18:45:03 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:0b:00.0/driver 00:02:26.002 18:45:03 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:26.002 18:45:03 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:26.002 18:45:03 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:26.002 18:45:03 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:26.002 18:45:03 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:28.528 00:02:28.528 real 0m3.879s 00:02:28.528 user 0m1.020s 00:02:28.528 sys 0m1.941s 00:02:28.528 18:45:05 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:28.528 18:45:05 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:28.528 ************************************ 00:02:28.528 END TEST denied 00:02:28.528 ************************************ 00:02:28.528 18:45:05 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:28.528 18:45:05 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:28.528 18:45:05 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:28.528 18:45:05 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:28.528 ************************************ 00:02:28.528 START TEST allowed 00:02:28.528 ************************************ 00:02:28.528 18:45:05 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:02:28.528 18:45:05 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:0b:00.0 00:02:28.528 18:45:05 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:28.528 18:45:05 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:0b:00.0 .*: nvme -> .*' 00:02:28.528 18:45:05 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:28.528 18:45:05 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:31.059 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:02:31.059 18:45:08 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:31.059 18:45:08 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:31.059 18:45:08 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:31.059 18:45:08 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:31.059 18:45:08 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:32.433 00:02:32.433 real 0m3.802s 00:02:32.433 user 0m1.026s 00:02:32.433 sys 0m1.654s 00:02:32.433 18:45:09 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:32.433 18:45:09 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:32.433 ************************************ 00:02:32.433 END TEST allowed 00:02:32.433 ************************************ 00:02:32.433 00:02:32.433 real 0m10.446s 00:02:32.433 user 0m3.180s 00:02:32.433 sys 0m5.289s 00:02:32.433 18:45:09 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:32.433 18:45:09 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:32.433 ************************************ 00:02:32.433 END TEST acl 00:02:32.433 ************************************ 00:02:32.433 18:45:09 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:32.433 18:45:09 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:32.433 18:45:09 setup.sh -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:02:32.433 18:45:09 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:32.433 ************************************ 00:02:32.433 START TEST hugepages 00:02:32.433 ************************************ 00:02:32.433 18:45:09 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:32.433 * Looking for test storage... 00:02:32.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:32.433 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:32.433 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:32.433 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:32.433 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:32.433 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:32.433 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:32.433 18:45:09 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:32.433 18:45:09 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:32.433 18:45:09 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:32.433 18:45:09 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:32.433 18:45:09 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:32.433 18:45:09 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:32.433 18:45:09 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:32.433 18:45:09 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:32.433 18:45:09 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:32.433 18:45:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.433 18:45:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.433 18:45:09 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39020484 kB' 'MemAvailable: 42938056 kB' 'Buffers: 2704 kB' 'Cached: 14605560 kB' 'SwapCached: 0 kB' 'Active: 11450412 kB' 'Inactive: 3693412 kB' 'Active(anon): 11010640 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538384 kB' 'Mapped: 176468 kB' 'Shmem: 10475080 kB' 'KReclaimable: 428824 kB' 'Slab: 816820 kB' 'SReclaimable: 428824 kB' 'SUnreclaim: 387996 kB' 'KernelStack: 12880 kB' 'PageTables: 8352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 12148736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197000 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 18036736 kB' 'DirectMap1G: 49283072 kB' 00:02:32.433 18:45:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:02:32.433 18:45:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue
[the xtrace then repeated the same four records (IFS=': ', read -r var val _, [[ $var == Hugepagesize ]], continue) for every remaining /proc/meminfo field from MemFree through AnonHugePages; the identical iterations are condensed here]
00:02:32.434 18:45:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.434 18:45:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.434 18:45:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.434 18:45:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:32.434 18:45:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.434 18:45:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.434 18:45:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.434 18:45:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:32.434 18:45:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.434 18:45:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.434 18:45:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.434 18:45:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:32.434 18:45:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.434 18:45:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:32.435 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:32.435 18:45:09 
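What the trace above boils down to: setup/common.sh's get_meminfo scans /proc/meminfo line by line until it hits the requested key (here Hugepagesize, 2048 kB), and clear_hp then zeroes every per-node hugepage pool before the test sets its own counts. A minimal standalone sketch of both steps, reconstructed from the trace rather than copied from the SPDK scripts (the redirection target in the clear loop is implied by the sysfs layout, since xtrace does not show redirections):

    #!/usr/bin/env bash
    # Look up a single /proc/meminfo field, mirroring the
    # IFS=': ' / read / compare / continue cycle traced above.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"        # e.g. 2048 for Hugepagesize
            return 0
        done < /proc/meminfo
        return 1
    }

    default_hugepages=$(get_meminfo Hugepagesize)

    # Zero every per-node hugepage pool, as clear_hp does above
    # (this host has node0 and node1, hence no_nodes=2).
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done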
00:02:32.435 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:02:32.435 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:02:32.435 18:45:09 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:02:32.435 18:45:09 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:02:32.435 18:45:09 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:02:32.435 18:45:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:32.435 ************************************
00:02:32.435 START TEST default_setup
00:02:32.435 ************************************
00:02:32.435 18:45:09 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup
00:02:32.435 18:45:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:02:32.435 18:45:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:02:32.435 18:45:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:02:32.435 18:45:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:02:32.435 18:45:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:02:32.435 18:45:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:02:32.435 18:45:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:32.435 18:45:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:32.435 18:45:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:02:32.435 18:45:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:02:32.435 18:45:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:02:32.435 18:45:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:32.435 18:45:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:32.435 18:45:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:32.435 18:45:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:32.435 18:45:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:02:32.435 18:45:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:32.435 18:45:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:02:32.435 18:45:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:02:32.435 18:45:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:02:32.435 18:45:09 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:02:32.435 18:45:09 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
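get_test_nr_hugepages converted the 2097152 kB request into nr_hugepages=1024 (2097152 / 2048) pinned to node 0, and scripts/setup.sh now takes over: it rebinds the ioat DMA engines and the NVMe device from their kernel drivers to vfio-pci, as the lines that follow show. A hedged sketch of the generic sysfs handoff for one device (the BDF is a placeholder, and setup.sh's real logic does considerably more, including hugepage reservation and permissions):

    # Move one PCI function from its current driver to vfio-pci (run as root).
    bdf=0000:00:04.0    # placeholder; the log rebinds 0000:00:04.x,
                        # 0000:80:04.x (ioatdma) and 0000:0b:00.0 (nvme)
    echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
    echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe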
00:02:33.812 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:02:33.812 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:02:33.812 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:02:33.812 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:02:33.812 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:02:33.812 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:02:33.812 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:02:33.812 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:02:33.812 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:02:33.812 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:02:33.812 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:02:33.812 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:02:33.812 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:02:33.812 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:02:33.812 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:02:33.812 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:02:34.748 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci
00:02:35.012 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:02:35.012 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:02:35.012 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:02:35.012 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:02:35.012 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:02:35.012 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:02:35.012 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:02:35.012 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:35.012 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:35.012 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:35.012 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:35.012 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:35.012 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:35.012 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:35.012 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:35.012 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:35.012 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:35.012 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:35.012 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:35.012 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:35.012 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41163680 kB' 'MemAvailable: 45081260 kB' 'Buffers: 2704 kB' 'Cached: 14605648 kB' 'SwapCached: 0 kB' 'Active: 11472716 kB' 'Inactive: 3693412 kB' 'Active(anon): 11032944 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561016 kB' 'Mapped: 177116 kB' 'Shmem: 10475168 kB' 'KReclaimable: 428832 kB' 'Slab: 816784 kB' 'SReclaimable: 428832 kB' 'SUnreclaim: 387952 kB' 'KernelStack: 12880 kB' 'PageTables: 8024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12175116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197176 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 18036736 kB' 'DirectMap1G: 49283072 kB'
00:02:35.012 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:35.012 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:35.012 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # ... the same IFS=': ' / read -r var val _ / compare / continue cycle repeats for each non-matching key: MemFree MemAvailable Buffers Cached SwapCached Active Inactive Active(anon) Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked SwapTotal SwapFree Zswap Zswapped Dirty Writeback AnonPages Mapped Shmem KReclaimable Slab SReclaimable SUnreclaim KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp CommitLimit Committed_AS VmallocTotal VmallocUsed VmallocChunk Percpu HardwareCorrupted
00:02:35.013 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:35.013 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:35.013 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:35.013 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
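With transparent hugepages not disabled ("always [madvise] never" passes the hugepages.sh@96 check) and AnonHugePages read back as 0 kB, verify_nr_hugepages now pulls HugePages_Surp and HugePages_Rsvd through the same get_meminfo scan; node= is empty each time, so the reads come from /proc/meminfo rather than a per-node /sys/devices/system/node/nodeN/meminfo. For comparison, all of the global counters can be collected in a single pass (an illustrative alternative, not what setup/common.sh does):

    # One pass over /proc/meminfo instead of one full scan per key.
    declare -A hp
    while IFS=': ' read -r var val _; do
        [[ $var == HugePages_* ]] && hp[$var]=$val
    done < /proc/meminfo
    echo "total=${hp[HugePages_Total]} free=${hp[HugePages_Free]}" \
         "rsvd=${hp[HugePages_Rsvd]} surp=${hp[HugePages_Surp]}"

On this host that would print total=1024 free=1024 rsvd=0 surp=0, matching the snapshots in this log.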
00:02:35.013 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:35.013 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:35.013 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:35.013 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:35.013 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:35.013 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:35.013 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.013 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41161732 kB' 'MemAvailable: 45079312 kB' 'Buffers: 2704 kB' 'Cached: 14605652 kB' 'SwapCached: 0 kB' 'Active: 11473392 kB' 'Inactive: 3693412 kB' 'Active(anon): 11033620 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561660 kB' 'Mapped: 177076 kB' 'Shmem: 10475172 kB' 'KReclaimable: 428832 kB' 'Slab: 816712 kB' 'SReclaimable: 428832 kB' 'SUnreclaim: 387880 kB' 'KernelStack: 12768 kB' 'PageTables: 7856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12176204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197132 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 18036736 kB' 'DirectMap1G: 49283072 kB' 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.014 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.015 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.015 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.015 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.015 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.015 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.015 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.015 18:45:12 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:35.015 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... setup/common.sh@31-32: the IFS=': ' read loop checks and skips each remaining meminfo key (NFS_Unstable through Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd) the same way ...]
00:02:35.015 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:35.015 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:35.015 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:35.015 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
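The walk above is the inner loop of get_meminfo in SPDK's test/setup/common.sh: it snapshots a meminfo file, then reads it back one 'key: value' pair at a time, skipping keys with continue until the requested one matches and its value is echoed. A minimal sketch reconstructed from the traced commands; the actual SPDK helper may differ in details such as error handling:

    # Sketch of the traced helper (assumed shape, reconstructed from the xtrace).
    shopt -s extglob   # needed for the +([0-9]) patterns seen in the trace

    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # Prefer the per-NUMA-node view when a node id is given and sysfs has it.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip it so keys align.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Against this host's snapshot, get_meminfo HugePages_Surp prints 0, which is exactly the surp=0 the trace just recorded.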
00:02:35.015 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:35.015 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:35.015 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:35.015 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:35.015 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:35.015 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:35.015 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:35.015 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:35.015 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:35.015 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:35.015 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:35.016 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:35.016 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41159816 kB' 'MemAvailable: 45077396 kB' 'Buffers: 2704 kB' 'Cached: 14605664 kB' 'SwapCached: 0 kB' 'Active: 11470016 kB' 'Inactive: 3693412 kB' 'Active(anon): 11030244 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558264 kB' 'Mapped: 177000 kB' 'Shmem: 10475184 kB' 'KReclaimable: 428832 kB' 'Slab: 816704 kB' 'SReclaimable: 428832 kB' 'SUnreclaim: 387872 kB' 'KernelStack: 12784 kB' 'PageTables: 7900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12172384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197160 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 18036736 kB' 'DirectMap1G: 49283072 kB'
[... setup/common.sh@31-32: every key from MemTotal onward is read and skipped with continue until HugePages_Rsvd ...]
00:02:35.018 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:35.018 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:35.018 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:35.018 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:02:35.018 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:35.018 nr_hugepages=1024
00:02:35.019 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:35.019 resv_hugepages=0
00:02:35.019 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:35.019 surplus_hugepages=0
00:02:35.019 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:35.019 anon_hugepages=0
00:02:35.019 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:35.019 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:35.019 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:35.019 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:35.019 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:35.019 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:35.019 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:35.019 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:35.019 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:35.019 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:35.019 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:35.019 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:35.019 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:35.019 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:35.019 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41158444 kB' 'MemAvailable: 45076024 kB' 'Buffers: 2704 kB' 'Cached: 14605688 kB' 'SwapCached: 0 kB' 'Active: 11473316 kB' 'Inactive: 3693412 kB' 'Active(anon): 11033544 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561512 kB' 'Mapped: 176996 kB' 'Shmem: 10475208 kB' 'KReclaimable: 428832 kB' 'Slab: 816704 kB' 'SReclaimable: 428832 kB' 'SUnreclaim: 387872 kB' 'KernelStack: 12768 kB' 'PageTables: 7848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12176248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197132 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 18036736 kB' 'DirectMap1G: 49283072 kB'
[... setup/common.sh@31-32: the same read/skip walk runs over the fresh snapshot until HugePages_Total matches ...]
00:02:35.020 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:35.020 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:02:35.020 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
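At this point the script holds surp=0, resv=0 and a freshly read HugePages_Total of 1024, and the traced arithmetic at hugepages.sh@107-110 asserts the hugetlb accounting identity: the kernel's total must equal the requested nr_hugepages plus surplus and reserved pages. Condensed into a standalone sketch (the wrapper name is illustrative, not SPDK's; it reuses the get_meminfo sketch above):

    # Illustrative wrapper around the traced assertions (hugepages.sh@107-110).
    verify_default_pool() {
        local nr_hugepages=1024                # requested pool size, as echoed above
        local surp resv total
        surp=$(get_meminfo HugePages_Surp)     # 0 in this run
        resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
        total=$(get_meminfo HugePages_Total)   # 1024 in this run
        # 1024 == 1024 + 0 + 0, so both traced arithmetic tests succeed
        (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages ))
    }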
00:02:35.020 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:35.020 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:02:35.020 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:02:35.020 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:35.020 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:35.020 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:35.020 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:35.020 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:35.020 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:35.020 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:35.020 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:35.020 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:35.020 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:35.020 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:02:35.020 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:35.021 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:35.021 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:35.021 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:35.021 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:35.021 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:35.021 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:35.021 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:35.021 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:35.021 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 17916588 kB' 'MemUsed: 14913296 kB' 'SwapCached: 0 kB' 'Active: 8336984 kB' 'Inactive: 3337460 kB' 'Active(anon): 7981228 kB' 'Inactive(anon): 0 kB' 'Active(file): 355756 kB' 'Inactive(file): 3337460 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11393316 kB' 'Mapped: 122560 kB' 'AnonPages: 284340 kB' 'Shmem: 7700100 kB' 'KernelStack: 7544 kB' 'PageTables: 5248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 152356 kB' 'Slab: 336404 kB' 'SReclaimable: 152356 kB' 'SUnreclaim: 184048 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
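get_nodes (traced at hugepages.sh@27-33) enumerates /sys/devices/system/node/node* and records each node's hugepage count: 1024 pages on node 0, 0 on node 1, so no_nodes=2 on this two-socket host. The loop at hugepages.sh@115-117 then re-checks HugePages_Surp per node, which is why get_meminfo runs again with node=0 and mem_f switched to the node-0 sysfs file. A sketch of that enumeration; the assignments arrive pre-expanded in the xtrace, so reading the per-node nr_hugepages counter from sysfs is an assumption about where the values come from:

    # Reconstruction of the traced get_nodes loop; the sysfs nr_hugepages path
    # is an assumed source for the pre-expanded values (1024 and 0) in the log.
    shopt -s extglob
    nodes_sys=()
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]}   # 2 on this host
        (( no_nodes > 0 ))          # at least one NUMA node must exist
    }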
00:02:35.021 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:35.021 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... setup/common.sh@31-32: the node-0 keys are read and skipped in turn, exactly as in the global walks above ...]
00:02:35.022 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.022 18:45:12
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.022 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.022 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.022 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.022 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:35.022 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:35.022 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:35.022 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:35.022 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:35.022 18:45:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:35.022 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:35.022 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:35.022 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:35.022 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:35.022 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:35.022 node0=1024 expecting 1024 00:02:35.022 18:45:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:35.022 00:02:35.022 real 0m2.542s 00:02:35.022 user 0m0.663s 00:02:35.022 sys 0m0.913s 00:02:35.022 18:45:12 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:35.022 18:45:12 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:02:35.022 ************************************ 00:02:35.022 END TEST default_setup 00:02:35.022 ************************************ 00:02:35.022 18:45:12 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:35.022 18:45:12 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:35.022 18:45:12 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:35.022 18:45:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:35.022 ************************************ 00:02:35.022 START TEST per_node_1G_alloc 00:02:35.022 ************************************ 00:02:35.022 18:45:12 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:02:35.022 18:45:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:02:35.022 18:45:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:35.022 18:45:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:35.022 18:45:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:35.022 18:45:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:02:35.022 18:45:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:35.022 18:45:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
00:02:35.022 18:45:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:35.022 18:45:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:02:35.022 18:45:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:02:35.022 18:45:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:02:35.022 18:45:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:02:35.022 18:45:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:02:35.022 18:45:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:35.022 18:45:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:35.022 18:45:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:35.022 18:45:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:02:35.022 18:45:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:35.022 18:45:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:35.022 18:45:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:35.022 18:45:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:35.022 18:45:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:02:35.022 18:45:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:02:35.022 18:45:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:02:35.022 18:45:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:02:35.022 18:45:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:35.022 18:45:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:36.463 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:36.463 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:36.463 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:36.463 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:36.463 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:36.463 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:36.463 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:36.463 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:36.463 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:36.463 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:36.463 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:36.463 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:36.463 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:36.463 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:36.463 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:36.463 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:36.463 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
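The arithmetic in the trace above: get_test_nr_hugepages converts the requested 1048576 kB (1 GiB) into hugepage counts at the default hugepage size, which is 2048 kB on this machine per the meminfo snapshots, then books that count against each requested node before handing NRHUGE/HUGENODE to scripts/setup.sh. A hedged sketch of that computation, with illustrative variable names rather than the exact setup/hugepages.sh helpers:

    # 1048576 kB / 2048 kB per page = 512 pages, assigned to node 0 and node 1.
    size_kb=1048576
    hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 here
    nr_hugepages=$(( size_kb / hugepage_kb ))                        # 512
    declare -A nodes_test
    for node in 0 1; do
        nodes_test[$node]=$nr_hugepages
    done
    NRHUGE=$nr_hugepages HUGENODE=0,1 ./scripts/setup.sh   # as at hugepages.sh@146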
00:02:36.463 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41148960 kB' 'MemAvailable: 45066540 kB' 'Buffers: 2704 kB' 'Cached: 14605760 kB' 'SwapCached: 0 kB' 'Active: 11468724 kB' 'Inactive: 3693412 kB' 'Active(anon): 11028952 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556896 kB' 'Mapped: 176636 kB' 'Shmem: 10475280 kB' 'KReclaimable: 428832 kB' 'Slab: 816636 kB' 'SReclaimable: 428832 kB' 'SUnreclaim: 387804 kB' 'KernelStack: 12800 kB' 'PageTables: 7960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12170176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197128 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 18036736 kB' 'DirectMap1G: 49283072 kB'
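A quick consistency check on the snapshot above, replaying numbers straight from the log: the static pool is fully free (HugePages_Free equals HugePages_Total), and Hugetlb is simply the pool size times the page size.

    # HugePages_Total * Hugepagesize, values copied from the snapshot:
    echo $(( 1024 * 2048 ))   # 2097152 kB == 2 GiB, matching 'Hugetlb: 2097152 kB'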
00:02:36.463 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] [repeated xtrace trimmed: the read/continue loop checks every meminfo key from MemTotal through HardwareCorrupted against AnonHugePages before the match below]
00:02:36.465 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:36.465 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:36.465 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:36.465 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
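The anon=0 result comes from the gate at hugepages.sh@96 visible earlier in the trace: anonymous huge pages only count when transparent hugepages are not set to never, and AnonHugePages is 0 kB in this snapshot anyway. A simplified sketch of that gate (the real code reads the counter through get_meminfo):

    # The THP mode string looks like "always [madvise] never"; brackets mark the active mode.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # kB; 0 in this log
    else
        anon=0
    fi
    echo "anon=$anon"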
00:02:36.465 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:36.465 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:36.465 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:36.465 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:36.465 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:36.465 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:36.465 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:36.465 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:36.465 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:36.465 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:36.465 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:36.465 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:36.465 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41148696 kB' 'MemAvailable: 45066276 kB' 'Buffers: 2704 kB' 'Cached: 14605764 kB' 'SwapCached: 0 kB' 'Active: 11468604 kB' 'Inactive: 3693412 kB' 'Active(anon): 11028832 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556688 kB' 'Mapped: 176572 kB' 'Shmem: 10475284 kB' 'KReclaimable: 428832 kB' 'Slab: 816636 kB' 'SReclaimable: 428832 kB' 'SUnreclaim: 387804 kB' 'KernelStack: 12800 kB' 'PageTables: 7940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12170196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197080 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 18036736 kB' 'DirectMap1G: 49283072 kB'
00:02:36.465 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] [repeated xtrace trimmed: read/continue over every meminfo key until HugePages_Surp matches below]
00:02:36.467 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:36.467 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:36.467 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:36.467 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
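Each of the scans trimmed above (for AnonHugePages and HugePages_Surp, with HugePages_Rsvd next) is the same get_meminfo loop from setup/common.sh: split each meminfo line on ': ', skip every key that is not the requested one, and echo the value of the first match. A minimal standalone sketch of the technique follows; it is a simplification, not the exact SPDK helper, and the per-node "Node N " prefix strip uses sed here instead of the script's extglob expansion:

    #!/usr/bin/env bash
    # Sketch: print the value of one meminfo key, optionally for a single NUMA node.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node counters live under /sys and prefix each line with "Node N ".
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the long runs of 'continue' in the trace
            echo "${val:-0}"
            return 0
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        echo 0   # key absent: report 0, as the trace does
    }
    get_meminfo_sketch HugePages_Surp      # prints 0 on this box, matching the log
    get_meminfo_sketch HugePages_Total 0   # the same counter restricted to node0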
00:02:36.467 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:36.467 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:36.467 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:36.467 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:36.467 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:36.467 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:36.467 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:36.467 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:36.467 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:36.467 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:36.467 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:36.467 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:36.467 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41148444 kB' 'MemAvailable: 45066024 kB' 'Buffers: 2704 kB' 'Cached: 14605780 kB' 'SwapCached: 0 kB' 'Active: 11468396 kB' 'Inactive: 3693412 kB' 'Active(anon): 11028624 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556464 kB' 'Mapped: 176572 kB' 'Shmem: 10475300 kB' 'KReclaimable: 428832 kB' 'Slab: 816636 kB' 'SReclaimable: 428832 kB' 'SUnreclaim: 387804 kB' 'KernelStack: 12800 kB' 'PageTables: 7936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12170216 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197080 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 18036736 kB' 'DirectMap1G: 49283072 kB'
00:02:36.467 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue  (same test-and-continue repeated for every field from MemTotal through HugePages_Free -- none match HugePages_Rsvd)
00:02:36.469 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:36.469 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:36.469 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:36.469 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:36.469 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:36.469 nr_hugepages=1024
00:02:36.469 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:36.470 resv_hugepages=0
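The @22-@29 records in the get_meminfo preamble show how the input file is chosen: with an empty node argument the path /sys/devices/system/node/node/meminfo does not exist, so the @23 test fails and the scan stays on /proc/meminfo, and the extglob substitution at @29 strips any leading "Node <n> " prefix so per-node files parse identically. A stand-alone sketch of just that selection step; pick_meminfo is a hypothetical name used only for this illustration:

#!/usr/bin/env bash
# Sketch of the input-file selection at common.sh@22-@29 (hypothetical wrapper).
shopt -s extglob                                 # for the +([0-9]) patterns

pick_meminfo() {
    local node=$1 mem_f mem
    mem_f=/proc/meminfo                          # @22: system-wide default
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo   # @24: per-node file
    fi
    mapfile -t mem < "$mem_f"                    # @28: slurp into an array
    mem=("${mem[@]#Node +([0-9]) }")             # @29: drop "Node 0 " prefixes
    printf '%s\n' "${mem[@]}"
}

pick_meminfo ''   # empty node: the @23 test fails, scan stays on /proc/meminfo
pick_meminfo 0    # node 0: switches to node0/meminfo, as in the pass below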
00:02:36.470 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:36.470 surplus_hugepages=0
00:02:36.470 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:36.470 anon_hugepages=0
00:02:36.470 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:36.470 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:36.470 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:36.470 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:36.470 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:36.470 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:36.470 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:36.470 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:36.470 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:36.470 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:36.470 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:36.470 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:36.470 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:36.470 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:36.470 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41148324 kB' 'MemAvailable: 45065904 kB' 'Buffers: 2704 kB' 'Cached: 14605804 kB' 'SwapCached: 0 kB' 'Active: 11468600 kB' 'Inactive: 3693412 kB' 'Active(anon): 11028828 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556656 kB' 'Mapped: 176572 kB' 'Shmem: 10475324 kB' 'KReclaimable: 428832 kB' 'Slab: 816668 kB' 'SReclaimable: 428832 kB' 'SUnreclaim: 387836 kB' 'KernelStack: 12800 kB' 'PageTables: 7956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12170240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197080 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 18036736 kB' 'DirectMap1G: 49283072 kB'
00:02:36.470 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue  (same test-and-continue repeated for every field from MemTotal through Unaccepted -- none match HugePages_Total)
00:02:36.471 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:36.471 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:02:36.471 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:36.471 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:36.471 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:36.471 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:02:36.472 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:36.472 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
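The checks at hugepages.sh@107 and @110 assert the invariant this whole pass is after: the kernel's HugePages_Total must equal the requested nr_hugepages plus surplus and reserved pages; with this run's values that is 1024 == 1024 + 0 + 0. Restated as a stand-alone check, reusing the get_meminfo sketch from earlier in this log:

# Restating the hugepages.sh@107/@110 invariant with this run's values.
nr_hugepages=1024                        # requested 2 MB pages
surp=$(get_meminfo HugePages_Surp)       # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)       # 0 in this run
total=$(get_meminfo HugePages_Total)     # 1024 in this run

(( total == nr_hugepages + surp + resv )) || {
    echo "hugepage accounting is off: $total != $nr_hugepages + $surp + $resv" >&2
    exit 1
}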
00:02:36.472 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:36.472 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:36.472 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:36.472 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:36.472 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:36.472 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:36.472 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:36.472 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:36.472 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:02:36.472 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:36.472 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:36.472 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:36.472 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:36.472 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:36.472 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:36.472 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:36.472 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:36.472 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:36.472 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 18965428 kB' 'MemUsed: 13864456 kB' 'SwapCached: 0 kB' 'Active: 8330896 kB' 'Inactive: 3337460 kB' 'Active(anon): 7975140 kB' 'Inactive(anon): 0 kB' 'Active(file): 355756 kB' 'Inactive(file): 3337460 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11393348 kB' 'Mapped: 122420 kB' 'AnonPages: 278112 kB' 'Shmem: 7700132 kB' 'KernelStack: 7528 kB' 'PageTables: 5168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 152356 kB' 'Slab: 336456 kB' 'SReclaimable: 152356 kB' 'SUnreclaim: 184100 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:02:36.472 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue  (same test-and-continue repeated for every node0 field from MemTotal through Unaccepted -- none match HugePages_Surp)
00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 22183572 kB' 'MemUsed: 5528252 kB' 'SwapCached: 0 kB' 'Active: 3137744 kB' 'Inactive: 355952 kB' 'Active(anon): 3053728 kB' 'Inactive(anon): 0 kB' 'Active(file): 84016 kB' 'Inactive(file): 355952 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3215204 kB' 'Mapped: 54152 kB' 'AnonPages: 278584 kB' 'Shmem: 2775236 kB' 'KernelStack: 5288 kB' 'PageTables: 2836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 276476 kB' 'Slab: 480212 kB' 'SReclaimable: 276476 kB' 'SUnreclaim: 203736 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
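[Editor's note] The trace above and below is one pass of the same field-scanning loop, repeated once per line of /sys/devices/system/node/node1/meminfo until it reaches the requested field. A minimal standalone sketch of that lookup pattern follows; get_meminfo_sketch is a hypothetical name for illustration, while the in-tree logic lives in setup/common.sh:get_meminfo.

#!/usr/bin/env bash
# Sketch of the per-node meminfo lookup traced in this log: pick the per-node
# meminfo file when a node id is given, strip the "Node <id>" prefix, and
# print the value of the requested field.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # With a node id, prefer the per-node view if the kernel exposes it.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix each line with "Node <id> "; drop it so field
    # names match the system-wide /proc/meminfo layout, then split each
    # line into "name: value [kB]" exactly as the traced loop does.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

# Example: the query the log answers just below with "echo 0".
get_meminfo_sketch HugePages_Surp 1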
00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.473 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.474 18:45:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.474 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.475 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.475 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.475 18:45:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.475 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.475 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.475 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.475 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.475 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.475 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.475 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.475 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.475 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:36.475 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.475 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.475 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.475 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:36.475 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:36.475 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:36.475 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:36.475 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:36.475 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:36.475 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:36.475 node0=512 expecting 512 00:02:36.475 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:36.475 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:36.475 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:36.475 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:36.475 node1=512 expecting 512 00:02:36.475 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:36.475 00:02:36.475 real 0m1.407s 00:02:36.475 user 0m0.582s 00:02:36.475 sys 0m0.784s 00:02:36.475 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:36.475 18:45:13 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:36.475 ************************************ 00:02:36.475 END TEST per_node_1G_alloc 00:02:36.475 ************************************ 00:02:36.475 18:45:13 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:36.475 18:45:13 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:36.475 18:45:13 
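[Editor's note] per_node_1G_alloc has just passed (both nodes report the expected 512 pages) and the log moves on to even_2G_alloc. Before the trace resumes, here is roughly the arithmetic that test sets up, as a hedged standalone sketch: variable names mirror the get_test_nr_hugepages / get_test_nr_hugepages_per_node trace, but this is not the setup/hugepages.sh source.

#!/usr/bin/env bash
# Rough sketch of the allocation even_2G_alloc configures below: convert a
# 2 GiB request into a hugepage count and spread it evenly across NUMA nodes.
size=2097152              # requested size in kB (2 GiB), per the trace
default_hugepages=2048    # Hugepagesize from /proc/meminfo, in kB
nr_hugepages=$((size / default_hugepages))       # -> 1024

_no_nodes=2               # two NUMA nodes on this rig
declare -a nodes_test
for ((node = 0; node < _no_nodes; node++)); do
    nodes_test[node]=$((nr_hugepages / _no_nodes))  # -> 512 per node
done

echo "node0=${nodes_test[0]} expecting 512"
echo "node1=${nodes_test[1]} expecting 512"

The test then exports NRHUGE=1024 and HUGE_EVEN_ALLOC=yes (visible in the trace below) before invoking scripts/setup.sh, which is what requests exactly this even split from the setup script.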
setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:36.475 18:45:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:36.475 ************************************ 00:02:36.475 START TEST even_2G_alloc 00:02:36.475 ************************************ 00:02:36.475 18:45:14 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:02:36.475 18:45:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:36.475 18:45:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:36.475 18:45:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:36.475 18:45:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:36.475 18:45:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:36.475 18:45:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:36.475 18:45:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:36.475 18:45:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:36.475 18:45:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:36.475 18:45:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:36.475 18:45:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:36.475 18:45:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:36.475 18:45:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:36.475 18:45:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:36.475 18:45:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:36.475 18:45:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:36.475 18:45:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:02:36.475 18:45:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:36.475 18:45:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:36.475 18:45:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:36.475 18:45:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:36.475 18:45:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:36.475 18:45:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:36.475 18:45:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:36.475 18:45:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:36.475 18:45:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:02:36.475 18:45:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:36.475 18:45:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:37.858 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:37.858 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:37.858 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 
00:02:37.858 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:37.858 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:37.858 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:37.858 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:37.858 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:37.858 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:37.858 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:37.858 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:37.858 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:37.858 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:37.858 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:37.858 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:37.858 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:37.858 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41136148 kB' 'MemAvailable: 45053728 kB' 'Buffers: 2704 kB' 'Cached: 14605900 kB' 'SwapCached: 0 kB' 'Active: 11469020 kB' 'Inactive: 3693412 kB' 'Active(anon): 11029248 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 
0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556916 kB' 'Mapped: 176608 kB' 'Shmem: 10475420 kB' 'KReclaimable: 428832 kB' 'Slab: 816640 kB' 'SReclaimable: 428832 kB' 'SUnreclaim: 387808 kB' 'KernelStack: 12800 kB' 'PageTables: 7904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12170448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197112 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 18036736 kB' 'DirectMap1G: 49283072 kB' 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.858 18:45:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.858 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:37.859 18:45:15 
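[Editor's note] The verification pass above first checks /sys/kernel/mm/transparent_hugepage/enabled and, because THP is not set to [never] on this host ("always [madvise] never"), samples AnonHugePages as a baseline before reading HugePages_Surp; here it lands on anon=0. A small sketch of that guard, assuming a stock Linux sysfs layout, with an awk one-liner standing in for the full get_meminfo loop:

#!/usr/bin/env bash
# Sketch of the anonymous-hugepage baseline check traced above: if THP is not
# fully disabled, AnonHugePages can be nonzero for reasons unrelated to the
# explicit reservation, so the verifier records it instead of assuming zero.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
anon=0
if [[ $thp != *"[never]"* ]]; then
    # Stand-in for "get_meminfo AnonHugePages"; the value is reported in kB.
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "anon=${anon}"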
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41142268 kB' 'MemAvailable: 45059848 kB' 'Buffers: 2704 kB' 'Cached: 14605900 kB' 'SwapCached: 0 kB' 'Active: 11469908 kB' 'Inactive: 3693412 kB' 'Active(anon): 11030136 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557412 kB' 'Mapped: 176608 kB' 'Shmem: 10475420 kB' 'KReclaimable: 428832 kB' 'Slab: 816608 kB' 'SReclaimable: 428832 kB' 'SUnreclaim: 387776 kB' 'KernelStack: 12896 kB' 'PageTables: 8148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12170100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197112 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 18036736 kB' 'DirectMap1G: 49283072 kB' 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.859 
18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.859 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[trace trimmed: the same common.sh@31 "IFS=': '; read -r var val _" / @32 compare-and-continue pair repeats for every remaining /proc/meminfo key, Cached through HugePages_Free, until the requested field is reached]
00:02:37.861 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.861 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:37.861 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:37.861 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
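The @17-@33 lines in this trace are the xtrace of the get_meminfo helper in setup/common.sh. A minimal sketch of what that helper does, reconstructed only from the trace itself (the real common.sh may differ in plumbing; the loop-over-read shape, the extglob prefix strip, and the node path probe are all visible in the log):

shopt -s extglob   # the +([0-9]) pattern at common.sh@29 needs extended globs

get_meminfo() {    # usage: get_meminfo <field> [numa-node]
    local get=$1 node=$2
    local var val
    local mem_f mem line
    mem_f=/proc/meminfo
    # @23-@24: when a node id is given, read the per-node sysfs copy instead
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"             # @28
    mem=("${mem[@]#Node +([0-9]) }")      # @29: drop the "Node N " sysfs prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"   # the repeated @31 pair
        [[ $var == "$get" ]] || continue         # the repeated @32 comparisons
        echo "$val"                              # @33: print the field's value
        return 0
    done
    return 1
}

Called as get_meminfo HugePages_Surp it walks every key until HugePages_Surp matches and prints 0, which hugepages.sh@99 stores as surp.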
00:02:37.861 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:37.861 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:37.861 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:37.861 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:37.861 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:37.861 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:37.861 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:37.861 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:37.861 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:37.861 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:37.861 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:37.861 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:37.861 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41142972 kB' 'MemAvailable: 45060552 kB' 'Buffers: 2704 kB' 'Cached: 14605920 kB' 'SwapCached: 0 kB' 'Active: 11469556 kB' 'Inactive: 3693412 kB' 'Active(anon): 11029784 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557432 kB' 'Mapped: 176640 kB' 'Shmem: 10475440 kB' 'KReclaimable: 428832 kB' 'Slab: 816688 kB' 'SReclaimable: 428832 kB' 'SUnreclaim: 387856 kB' 'KernelStack: 12864 kB' 'PageTables: 8008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12170120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197080 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 18036736 kB' 'DirectMap1G: 49283072 kB'
[trace trimmed: the @31 read / @32 compare-and-continue pair repeats per key, MemTotal through HugePages_Free, none matching HugePages_Rsvd]
00:02:37.862 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:37.862 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:37.862 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:37.862 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:37.862 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:37.862 nr_hugepages=1024
00:02:37.862 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:37.862 resv_hugepages=0
00:02:37.862 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:37.862 surplus_hugepages=0
00:02:37.862 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:37.862 anon_hugepages=0
00:02:37.862 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:37.862 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
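For clarity, the two guards traced at hugepages.sh@107-@109 reduce to simple arithmetic with this run's values (all taken from the lines above):

# values echoed above by hugepages.sh@102-@105
nr_hugepages=1024
surp=0   # get_meminfo HugePages_Surp
resv=0   # get_meminfo HugePages_Rsvd
(( 1024 == nr_hugepages + surp + resv ))   # @107: 1024 == 1024 + 0 + 0, holds
(( 1024 == nr_hugepages ))                 # @109: holds, every requested page was allocated

Both succeed; @110 then re-checks the same identity against HugePages_Total read back from /proc/meminfo.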
18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.862 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41143552 kB' 'MemAvailable: 45061132 kB' 'Buffers: 2704 kB' 'Cached: 14605924 kB' 'SwapCached: 0 kB' 'Active: 11468508 kB' 'Inactive: 3693412 kB' 'Active(anon): 11028736 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556376 kB' 'Mapped: 176580 kB' 'Shmem: 10475444 kB' 'KReclaimable: 428832 kB' 'Slab: 816688 kB' 'SReclaimable: 428832 kB' 'SUnreclaim: 387856 kB' 'KernelStack: 12832 kB' 'PageTables: 7900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12170148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197080 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 18036736 kB' 'DirectMap1G: 49283072 kB' 00:02:37.862 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.862 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.862 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.862 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.862 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.862 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.862 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.862 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.862 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.862 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.862 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.863 18:45:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.863 18:45:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.863 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... the same IFS=': ' / read -r var val _ / [[ field == HugePages_Total ]] / continue cycle repeats for KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree and Unaccepted -- none of which match ...]
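What the elided churn above amounts to: the traced get_meminfo helper in SPDK's test/setup/common.sh walks a meminfo dump one "field: value" line at a time and echoes the value of the first key that matches the request. A minimal standalone sketch reconstructed from the xtrace -- indicative only, not a verbatim copy of the helper:

    #!/usr/bin/env bash
    # get_meminfo <field> [node] -- echo the field's value from /proc/meminfo,
    # or from the node's sysfs meminfo when a NUMA node number is given.
    shopt -s extglob                      # for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=$2
        local line var val _ mem_f=/proc/meminfo
        local -a mem
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # sysfs lines carry a "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the repeated test seen in the trace
            echo "$val"                        # kB for sizes, a bare count for HugePages_*
            return 0
        done
        return 1
    }
    # Usage, matching this run's values:
    #   get_meminfo HugePages_Total    # -> 1024 (before odd_alloc re-sizes)
    #   get_meminfo HugePages_Surp 0   # -> 0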
00:02:37.864 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:37.864 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:37.864 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.864 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:02:37.864 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:37.864 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:37.864 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:37.864 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:02:37.864 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:37.864 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:37.864 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:37.864 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:37.864 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:37.864 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:37.864 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:37.864 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:37.864 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:37.864 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:37.864 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:02:37.864 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:37.864 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:37.864 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:37.864 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:37.864 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:37.864 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:37.864 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:37.864 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:37.864 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:37.864 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 18958556 kB' 'MemUsed: 13871328 kB' 'SwapCached: 0 kB' 'Active: 8329816 kB' 'Inactive: 3337460 kB' 'Active(anon): 7974060 kB' 'Inactive(anon): 0 kB' 'Active(file): 355756 kB' 'Inactive(file): 3337460 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11393412 kB' 'Mapped: 122428 kB' 'AnonPages: 276964 kB' 'Shmem: 7700196 kB' 'KernelStack: 7512 kB' 'PageTables: 5128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB'
'WritebackTmp: 0 kB' 'KReclaimable: 152356 kB' 'Slab: 336280 kB' 'SReclaimable: 152356 kB' 'SUnreclaim: 183924 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... the [[ field == HugePages_Surp ]] / continue cycle repeats for every field of the node0 dump above until HugePages_Surp matches ...]
00:02:37.865 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.865 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:37.865 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:37.865 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:37.865 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:37.865 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:37.865 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:02:37.865 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:37.865 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:02:37.865 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:37.865 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:37.865 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:37.865 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:02:37.865 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:02:37.865 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:37.865 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
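The mem=(...) expansion traced just above is what lets one parser read both sources: per-node sysfs meminfo prefixes every line with "Node 1 ", while /proc/meminfo does not. In isolation (extglob must be enabled, as the +([0-9]) pattern already implies):

    shopt -s extglob
    mem=('Node 1 MemTotal: 27711824 kB' 'Node 1 HugePages_Surp: 0')
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix from every element
    printf '%s\n' "${mem[@]}"          # -> "MemTotal: 27711824 kB" etc.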
00:02:37.865 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:37.865 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:37.865 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 22185388 kB' 'MemUsed: 5526436 kB' 'SwapCached: 0 kB' 'Active: 3139128 kB' 'Inactive: 355952 kB' 'Active(anon): 3055112 kB' 'Inactive(anon): 0 kB' 'Active(file): 84016 kB' 'Inactive(file): 355952 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3215260 kB' 'Mapped: 54152 kB' 'AnonPages: 279876 kB' 'Shmem: 2775292 kB' 'KernelStack: 5336 kB' 'PageTables: 2816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 276476 kB' 'Slab: 480408 kB' 'SReclaimable: 276476 kB' 'SUnreclaim: 203932 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... the [[ field == HugePages_Surp ]] / continue cycle repeats for every field of the node1 dump above, including HugePages_Total and HugePages_Free, until HugePages_Surp matches ...]
00:02:37.866 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.866 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:37.866 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:37.866 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:37.866 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:37.866 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:37.866 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:37.866 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:02:37.866 node0=512 expecting 512
00:02:37.866 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:37.866 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:37.866 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:37.866 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:02:37.866 node1=512 expecting 512
00:02:37.866 18:45:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:02:37.866
00:02:37.866 real 0m1.328s
00:02:37.866 user 0m0.542s
00:02:37.866 sys 0m0.746s
00:02:37.866 18:45:15 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:02:37.866 18:45:15 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:02:37.866 ************************************
00:02:37.866 END TEST even_2G_alloc
00:02:37.866 ************************************
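Condensed to its arithmetic, the check even_2G_alloc just passed looks like this (values taken from the trace above; an illustrative sketch, not the verbatim hugepages.sh code):

    # even_2G_alloc's verification, with this run's numbers
    nr_hugepages=1024 surp=0 resv=0      # requested total, surplus, reserved
    nodes_test=(512 512)                 # HugePages_Total per node, from sysfs
    nodes_sys=(512 512)                  # per-node allocation the test asked for
    (( 1024 == nr_hugepages + surp + resv )) || exit 1   # global total checks out
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))   # reserved pages still belong to the node
        (( nodes_test[node] += 0 ))      # per-node HugePages_Surp was 0 on both nodes
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
        [[ ${nodes_test[node]} == "${nodes_sys[node]}" ]] || exit 1
    done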
00:02:37.866 18:45:15 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:02:37.866 18:45:15 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:02:37.866 18:45:15 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:02:37.866 18:45:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:37.866 ************************************
00:02:37.866 START TEST odd_alloc
00:02:37.866 ************************************
00:02:37.866 18:45:15 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc
00:02:37.866 18:45:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:02:37.866 18:45:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:02:37.866 18:45:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:02:37.866 18:45:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:37.866 18:45:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:02:37.866 18:45:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:02:37.866 18:45:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:37.866 18:45:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:02:37.866 18:45:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:02:37.866 18:45:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:37.866 18:45:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:37.866 18:45:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:37.866 18:45:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:37.866 18:45:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:02:37.866 18:45:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:37.866 18:45:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:02:37.866 18:45:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:02:37.866 18:45:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:02:37.866 18:45:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:37.866 18:45:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:02:37.866 18:45:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:02:37.866 18:45:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:02:37.866 18:45:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:37.866 18:45:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:02:37.866 18:45:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
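The parameters just derived are the point of odd_alloc: HUGEMEM=2049 MB forces an odd page count so the two NUMA nodes cannot split it evenly. The arithmetic, assuming ceiling division (an assumption that reproduces the traced 1025, not necessarily hugepages.sh's exact formula):

    HUGEMEM=2049                            # MB, deliberately odd
    size_kb=$(( HUGEMEM * 1024 ))           # 2098176 kB, as passed to get_test_nr_hugepages
    default_hugepages=2048                  # kB per 2M hugepage (Hugepagesize in meminfo)
    nr_hugepages=$(( (size_kb + default_hugepages - 1) / default_hugepages ))   # 1025
    no_nodes=2
    base=$(( nr_hugepages / no_nodes ))     # 512 per node ...
    rem=$(( nr_hugepages % no_nodes ))      # ... plus 1 left over for one node
    echo "node0=$(( base + rem )) node1=$base total=$nr_hugepages"   # 513 / 512 / 1025
    # cross-check: 1025 * 2048 kB = 2099200 kB, the Hugetlb figure in the later dump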
00:02:37.866 18:45:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:02:37.866 18:45:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:37.866 18:45:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:39.245 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:39.245 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:39.245 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:39.245 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:39.245 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:39.245 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:39.245 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:39.245 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:39.245 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:39.245 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:39.245 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:39.245 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:39.245 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:39.245 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:39.245 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:39.245 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:39.245 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:39.245 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:02:39.245 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:02:39.245 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:39.245 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:39.245 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:39.245 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:39.245 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:39.245 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:39.245 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:39.245 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:39.245 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:02:39.245 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:02:39.245 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:39.245 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:39.245 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41137008 kB' 'MemAvailable: 45054588 kB' 'Buffers: 2704 kB' 'Cached: 14606036 kB' 'SwapCached: 0 kB' 'Active: 11465360 kB' 'Inactive: 3693412 kB' 'Active(anon): 11025588 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553260 kB' 'Mapped: 175648 kB' 'Shmem: 10475556 kB' 'KReclaimable: 428832 kB' 'Slab: 816652 kB' 'SReclaimable: 428832 kB' 'SUnreclaim: 387820 kB' 'KernelStack: 12816 kB' 'PageTables: 7724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 12155340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197112 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB'
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 18036736 kB' 'DirectMap1G: 49283072 kB' 00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.246 18:45:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:39.246 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
...
00:02:39.247 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:39.247 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:39.247 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:39.247 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:02:39.247 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:39.247 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:39.247 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:02:39.247 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:02:39.247 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:39.247 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:39.247 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:39.247 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:39.247 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:39.247 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:39.247 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:39.247 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
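The records above are bash xtrace output from the get_meminfo helper in setup/common.sh: it loads a meminfo file into the mem array and then scans it key by key, printing the value of the one requested key and tracing a continue for every other line. A minimal sketch of that helper, reconstructed only from the traced commands (variable names get, node, mem_f, mem, var, val follow common.sh@17-33; the real SPDK script may differ in detail):

    #!/usr/bin/env bash
    # Sketch of get_meminfo as reconstructed from the trace above.
    shopt -s extglob   # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1       # key to look up, e.g. AnonHugePages
        local node=${2:-}  # optional NUMA node; empty selects /proc/meminfo
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # per-node files prefix every line with "Node N "; strip it
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the repeated records above
            echo "$val"                        # common.sh@33 -- # echo 0
            return 0
        done < <(printf '%s\n' "${mem[@]}")    # common.sh@16 -- # printf
        return 1
    }

With 1025 hugepages configured, get_meminfo AnonHugePages prints 0 here, which hugepages.sh stores as anon=0 before asking for HugePages_Surp.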
00:02:39.247 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41136504 kB' 'MemAvailable: 45054084 kB' 'Buffers: 2704 kB' 'Cached: 14606040 kB' 'SwapCached: 0 kB' 'Active: 11465000 kB' 'Inactive: 3693412 kB' 'Active(anon): 11025228 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552912 kB' 'Mapped: 175624 kB' 'Shmem: 10475560 kB' 'KReclaimable: 428832 kB' 'Slab: 816652 kB' 'SReclaimable: 428832 kB' 'SUnreclaim: 387820 kB' 'KernelStack: 12800 kB' 'PageTables: 7676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 12155360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197080 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 18036736 kB' 'DirectMap1G: 49283072 kB'
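Two quick consistency checks on the snapshot above, using plain arithmetic on the printed values (not part of the test itself): the kernel's aggregate fields are the sums of their parts, and the hugetlb pool size matches the page count times the page size.

    echo $(( 11025228 + 439772 ))   # Active(anon) + Active(file) = 11465000, matches 'Active: 11465000 kB'
    echo $(( 1025 * 2048 ))         # HugePages_Total * Hugepagesize = 2099200, matches 'Hugetlb: 2099200 kB'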
...
00:02:39.249 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:39.249 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:39.249 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:39.249 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:39.249 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:39.249 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
...
00:02:39.249 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41136000 kB' 'MemAvailable: 45053580 kB' 'Buffers: 2704 kB' 'Cached: 14606044 kB' 'SwapCached: 0 kB' 'Active: 11465264 kB' 'Inactive: 3693412 kB' 'Active(anon): 11025492 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553132 kB' 'Mapped: 175548 kB' 'Shmem: 10475564 kB' 'KReclaimable: 428832 kB' 'Slab: 816628 kB' 'SReclaimable: 428832 kB' 'SUnreclaim: 387796 kB' 'KernelStack: 12800 kB' 'PageTables: 7672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 12155380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197080 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 18036736 kB' 'DirectMap1G: 49283072 kB'
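The [[ -e /sys/devices/system/node/node/meminfo ]] test traced earlier fails because node is empty, so the system-wide /proc/meminfo is read. When a node number is passed instead, the per-node file prefixes every line with "Node N ", and the extglob expansion traced at common.sh@29 strips that prefix. A standalone demo of the strip (the two sample lines are made up for illustration):

    shopt -s extglob
    mem=('Node 0 MemTotal: 60541708 kB' 'Node 0 HugePages_Total: 1025')
    mem=("${mem[@]#Node +([0-9]) }")   # the expansion at common.sh@29
    printf '%s\n' "${mem[@]}"
    # MemTotal: 60541708 kB
    # HugePages_Total: 1025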
...
00:02:39.251 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:39.251 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:39.251 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:39.251 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:39.251 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:02:39.251 nr_hugepages=1025
00:02:39.251 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:39.251 resv_hugepages=0
00:02:39.251 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:39.251 surplus_hugepages=0
00:02:39.251 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:39.251 anon_hugepages=0
00:02:39.251 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:02:39.251 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
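At this point hugepages.sh has extracted all three counters, and the two arithmetic guards at @107 and @109 pass trivially. Spelled out with this run's values (a sketch of the traced checks, not the script's literal source):

    nr_hugepages=1025   # HugePages_Total from /proc/meminfo
    surp=0              # HugePages_Surp
    resv=0              # HugePages_Rsvd
    (( 1025 == nr_hugepages + surp + resv ))   # 1025 == 1025 + 0 + 0: true
    (( 1025 == nr_hugepages ))                 # true, so odd_alloc proceeds

The odd request of 1025 pages is fully accounted for, with no surplus or reserved pages hiding in the count, so the test goes on to re-read HugePages_Total.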
00:02:39.251 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:39.251 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
...
00:02:39.251 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41136608 kB' 'MemAvailable: 45054188 kB' 'Buffers: 2704 kB' 'Cached: 14606076 kB' 'SwapCached: 0 kB' 'Active: 11464944 kB' 'Inactive: 3693412 kB' 'Active(anon): 11025172 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552796 kB' 'Mapped: 175548 kB' 'Shmem: 10475596 kB' 'KReclaimable: 428832 kB' 'Slab: 816628 kB' 'SReclaimable: 428832 kB' 'SUnreclaim: 387796 kB' 'KernelStack: 12816 kB' 'PageTables: 7716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 12155400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197080 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 18036736 kB' 'DirectMap1G: 49283072 kB'
...
-- # continue 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.252 18:45:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.252 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:39.253 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 18950460 kB' 'MemUsed: 13879424 kB' 'SwapCached: 0 kB' 'Active: 8328456 kB' 'Inactive: 3337460 kB' 'Active(anon): 7972700 kB' 'Inactive(anon): 0 kB' 'Active(file): 355756 kB' 'Inactive(file): 3337460 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11393536 kB' 'Mapped: 121856 kB' 'AnonPages: 275524 kB' 'Shmem: 7700320 kB' 'KernelStack: 7464 kB' 'PageTables: 4952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 152356 kB' 'Slab: 336312 kB' 'SReclaimable: 152356 kB' 'SUnreclaim: 183956 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
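The loop being traced here is setup/common.sh's get_meminfo helper: it snapshots either /proc/meminfo or a per-node /sys/devices/system/node/nodeN/meminfo file, strips the leading "Node N " prefix that the per-node files carry, then walks the "field: value" pairs until the requested field matches and echoes its value. A minimal stand-alone sketch of that pattern, assuming bash with extglob enabled; the function name is illustrative, not the SPDK source verbatim:

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern used on per-node lines

get_meminfo_sketch() {
    local get=$1 node=$2
    local var val _ line
    local mem_f=/proc/meminfo mem
    # Per-node statistics live under sysfs when a node id is given.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node N "
    local IFS=': '                     # split on both the colon and spaces
    for line in "${mem[@]}"; do
        read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the long [[ ... == ... ]] runs in the trace
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo_sketch HugePages_Total     # e.g. 1025 on the box traced above
get_meminfo_sketch HugePages_Surp 0    # per-node surplus, read from node0/meminfo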
[xtrace: setup/common.sh@31-32 scans the node0 meminfo fields, MemTotal through HugePages_Free, against HugePages_Surp, continuing on each non-match]
00:02:39.254 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:39.254 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:39.254 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:39.254 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:39.254 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:39.254 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:39.254 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:02:39.254 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:39.254 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:02:39.254 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:02:39.254 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:39.254 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:39.254 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:02:39.254 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:02:39.254 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:39.254 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:39.254 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:39.254 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:39.254 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 22186148 kB' 'MemUsed: 5525676 kB' 'SwapCached: 0 kB' 'Active: 3136492 kB' 'Inactive: 355952 kB' 'Active(anon): 3052476 kB' 'Inactive(anon): 0 kB' 'Active(file): 84016 kB' 'Inactive(file): 355952 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3215268 kB' 'Mapped: 53692 kB' 'AnonPages: 277232 kB' 'Shmem: 2775300 kB' 'KernelStack: 5336 kB' 'PageTables: 2716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 276476 kB' 'Slab: 480316 kB' 'SReclaimable: 276476 kB' 'SUnreclaim: 203840 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[xtrace: setup/common.sh@31-32 scans the node1 meminfo fields, MemTotal through HugePages_Free, against HugePages_Surp, continuing on each non-match]
00:02:39.256 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:39.256 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:39.256 18:45:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:39.256 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:39.256 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:39.256 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:39.256 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:39.256 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:02:39.256 node0=512 expecting 513
00:02:39.256 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:39.256 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:39.256 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:39.256 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:02:39.256 node1=513 expecting 512
00:02:39.256 18:45:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:02:39.256
00:02:39.256 real 0m1.382s
00:02:39.256 user 0m0.600s
00:02:39.256 sys 0m0.742s
00:02:39.256 18:45:16 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:02:39.256 18:45:16 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:02:39.256 ************************************
00:02:39.256 END TEST odd_alloc
00:02:39.256 ************************************
00:02:39.256 18:45:16 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:02:39.256 18:45:16 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:02:39.256 18:45:16 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:02:39.256 18:45:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:39.256 ************************************
00:02:39.256 START TEST custom_alloc
00:02:39.256 ************************************
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
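get_test_nr_hugepages_per_node, entered just above, turns the size argument (in kB) into 2048 kB pages and spreads them across the two nodes: 1048576 kB is 512 pages, 256 per node, while odd_alloc's 1025 pages (2099200 kB, the Hugetlb figure in the earlier snapshot) split as 512 and 513. A sketch of that arithmetic, assuming the remainder simply lands on one node; which node actually holds the odd page can differ, and the test compares the sorted per-node counts, which is why "node0=512 expecting 513" still passed:

#!/usr/bin/env bash
default_hugepages=2048   # hugepage size in kB, matching 'Hugepagesize: 2048 kB'

split_hugepages() {
    local size_kb=$1 no_nodes=$2
    local nr_hugepages=$((size_kb / default_hugepages))
    local base=$((nr_hugepages / no_nodes))
    local rem=$((nr_hugepages % no_nodes))
    local node
    for ((node = 0; node < no_nodes; node++)); do
        # one node absorbs the remainder of an odd page count
        echo "node$node=$((base + (node == no_nodes - 1 ? rem : 0)))"
    done
}

split_hugepages 1048576 2            # 512 pages -> node0=256 node1=256
split_hugepages $((1025 * 2048)) 2   # 1025 pages -> node0=512 node1=513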
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:39.256 18:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
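With both per-node counts in nodes_hp, the loop traced above assembles the HUGENODE specification handed to scripts/setup.sh; the comma join comes from the "local IFS=," set at the top of custom_alloc. A sketch of just that hand-off, with the array contents taken from the trace (how setup.sh consumes the string is the SPDK script's own behavior, not guaranteed by this sketch):

#!/usr/bin/env bash
nodes_hp=([0]=512 [1]=1024)   # per-node hugepage counts from the trace

HUGENODE=()
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
done

print_hugenode() {
    local IFS=,                        # makes ${HUGENODE[*]} join with commas
    echo "HUGENODE=${HUGENODE[*]}"     # -> HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024
}
print_hugenode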
00:02:40.634 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:40.634 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:40.634 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:40.634 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:40.634 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:40.634 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:40.634 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:40.634 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:40.634 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:40.634 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:40.634 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:40.634 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:40.634 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:40.634 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:40.634 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:40.634 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:40.634 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:40.634 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:02:40.634 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:02:40.634 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:02:40.634 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:40.634 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:40.634 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:40.634 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:40.634 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:40.634 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:40.634 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:40.634 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:40.634 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:02:40.634 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:40.634 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:40.634 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:40.634 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:40.634 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:40.635 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:40.635 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:40.635 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:40.635 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:40.635 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40108040 kB' 'MemAvailable: 44025620 kB' 'Buffers: 2704 kB' 'Cached: 14606164 kB' 'SwapCached: 0 kB' 'Active: 11464988 kB' 'Inactive: 3693412 kB' 'Active(anon): 11025216 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552700 kB' 'Mapped: 175620 kB' 'Shmem: 10475684 kB' 'KReclaimable: 428832 kB' 'Slab: 816940 kB' 'SReclaimable: 428832 kB' 'SUnreclaim: 388108 kB' 'KernelStack: 12816 kB' 'PageTables: 7716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 12155228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197176 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 18036736 kB' 'DirectMap1G: 49283072 kB'
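verify_nr_hugepages leans on the get_meminfo helper: the trace shows it slurping /proc/meminfo (mapfile -t mem, stripping any leading "Node N " prefix when reading a per-node file), then testing each key against the requested one and hitting continue until it matches. A compact sketch of the same lookup, streaming the file instead of using mapfile (the function name and the IFS=': ' split come from the trace; the rest is a simplification, not the exact setup/common.sh code):

```bash
#!/usr/bin/env bash
# Look up one field of /proc/meminfo, e.g. "HugePages_Total" -> "1536".
# Mirrors the traced loop: split on ": ", skip keys until the match.
get_meminfo() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue   # common.sh@32: skip non-matching keys
    echo "$val"                        # common.sh@33: print the value, done
    return 0
  done < /proc/meminfo
  return 1
}

get_meminfo HugePages_Total   # 1536 on this runner
get_meminfo Hugepagesize      # 2048 (the trailing "kB" lands in the _ field)
```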
[... xtrace elided: setup/common.sh@32 compares each key of the snapshot above against AnonHugePages and continues past every non-match (MemTotal ... HardwareCorrupted) ...]
00:02:40.636 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:40.636 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:40.636 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:40.636 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:02:40.636 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[... xtrace elided: get_meminfo prologue identical to the AnonHugePages call above (common.sh@17-31: locals, mem_f=/proc/meminfo, mapfile -t mem, IFS=': ', read) ...]
00:02:40.636 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40104884 kB' 'MemAvailable: 44022464 kB' 'Buffers: 2704 kB' 'Cached: 14606168 kB' 'SwapCached: 0 kB' 'Active: 11467788 kB' 'Inactive: 3693412 kB' 'Active(anon): 11028016 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555524 kB' 'Mapped: 176064 kB' 'Shmem: 10475688 kB' 'KReclaimable: 428832 kB' 'Slab: 816944 kB' 'SReclaimable: 428832 kB' 'SUnreclaim: 388112 kB' 'KernelStack: 12800 kB' 'PageTables: 7644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 12158952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197112 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 18036736 kB' 'DirectMap1G: 49283072 kB'
[... xtrace elided: common.sh@32 scans for HugePages_Surp, continuing past every non-matching key (MemTotal ... Unaccepted) ...]
00:02:40.638 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:40.638 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:40.638 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:40.638 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:40.638 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:40.638 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:40.638 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:40.638 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:40.638 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:40.638 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:40.638 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:40.638 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:40.638 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:40.638 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:40.638 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:40.638 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:40.638 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[... xtrace elided: get_meminfo prologue as above (common.sh@17-31, with get=HugePages_Rsvd) ...]
00:02:40.638 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40100856 kB' 'MemAvailable: 44018436 kB' 'Buffers: 2704 kB' 'Cached: 14606180 kB' 'SwapCached: 0 kB' 'Active: 11470472 kB' 'Inactive: 3693412 kB' 'Active(anon): 11030700 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558268 kB' 'Mapped: 176412 kB' 'Shmem: 10475700 kB' 'KReclaimable: 428832 kB' 'Slab: 817028 kB' 'SReclaimable: 428832 kB' 'SUnreclaim: 388196 kB' 'KernelStack: 12816 kB' 'PageTables: 7684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 12161756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197084 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 18036736 kB' 'DirectMap1G: 49283072 kB'
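As a quick cross-check, the snapshot above is internally consistent with the configuration this test requested: HugePages_Total is the sum of the two per-node targets, and Hugetlb is that page count times Hugepagesize (a two-line arithmetic check, not part of the test itself):

```bash
echo $(( 512 + 1024 ))    # 1536    == HugePages_Total (nodes_hp[0] + nodes_hp[1])
echo $(( 1536 * 2048 ))   # 3145728 == Hugetlb in kB (pages x 2048 kB pagesize)
```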
[... xtrace elided: setup/common.sh@32 scans the snapshot above for HugePages_Rsvd, continuing past every non-matching key (MemTotal ... ShmemHugePages) ...]
00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc --
setup/common.sh@31 -- # IFS=': ' 00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
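The trace above is setup/common.sh's get_meminfo() walking /proc/meminfo one field at a time under set -x: each IFS / read / compare / continue group is one loop iteration, and the loop ends when the requested field (here HugePages_Rsvd) matches. A minimal sketch of the helper, reconstructed from the common.sh line references in the trace itself; the real SPDK script may differ in detail:

    #!/usr/bin/env bash
    # Reconstruction of setup/common.sh:get_meminfo() from the trace above.
    # get_meminfo FIELD [NODE] prints FIELD's value from /proc/meminfo, or from
    # /sys/devices/system/node/nodeNODE/meminfo when NODE is given.
    shopt -s extglob   # needed for the +([0-9]) pattern seen in the trace

    get_meminfo() {
        local get=$1
        local node=${2:-}
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # A per-node query reads that node's own meminfo file instead.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo

        mapfile -t mem <"$mem_f"
        # Per-node files prefix each line with "Node <N> "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")

        # Each IFS=': ' / read -r var val _ / [[ ... ]] / continue group in the
        # trace is one iteration of this loop; the escaped right-hand side in
        # the trace is just a literal (non-pattern) comparison.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

With this helper, resv=$(get_meminfo HugePages_Rsvd) yields the resv=0 assignment that follows.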
00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:02:40.640 nr_hugepages=1536
00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:40.640 resv_hugepages=0
00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:40.640 surplus_hugepages=0
00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:40.640 anon_hugepages=0
00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:40.640 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40108352 kB' 'MemAvailable: 44025932 kB' 'Buffers: 2704 kB' 'Cached: 14606204 kB' 'SwapCached: 0 kB' 'Active: 11466408 kB' 'Inactive: 3693412 kB' 'Active(anon): 11026636 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554184 kB' 'Mapped: 176476 kB' 'Shmem: 10475724 kB' 'KReclaimable: 428832 kB' 'Slab: 817028 kB' 'SReclaimable: 428832 kB' 'SUnreclaim: 388196 kB' 'KernelStack: 12848 kB' 'PageTables: 7784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 12157408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197080 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 18036736 kB' 'DirectMap1G: 49283072 kB'
[trace condensed: the IFS / read / compare / continue cycle repeats for every snapshot field from MemTotal through Unaccepted, none of which matches HugePages_Total]
00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
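The hugepages.sh lines above are the allocator's consistency check and node discovery: the requested 1536 pages must equal HugePages_Total and equal nr_hugepages + surplus + reserved, and get_nodes() records the intended split across the two NUMA nodes (512 on node0, 1024 on node1). A sketch of that logic under the trace's own names; get_meminfo is the helper sketched earlier, and the source of the per-node counts is an assumption, since the trace only shows the resulting assignments:

    #!/usr/bin/env bash
    # Sketch of the accounting traced above; values in comments are this run's.
    shopt -s extglob

    nr_hugepages=1536
    resv=$(get_meminfo HugePages_Rsvd)     # 0
    surp=$(get_meminfo HugePages_Surp)     # 0
    total=$(get_meminfo HugePages_Total)   # 1536

    # The allocation is sane only if the kernel's total matches the request.
    (( total == nr_hugepages + surp + resv )) || exit 1

    # get_nodes(): one nodes_sys[] entry per NUMA node. Reading the count from
    # the sysfs nr_hugepages file is an assumption; the trace only shows the
    # assignments 512 (node0) and 1024 (node1).
    declare -a nodes_sys
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]}   # 2 in this run
        (( no_nodes > 0 ))
    }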
node in "${!nodes_test[@]}" 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 18972360 kB' 'MemUsed: 13857524 kB' 'SwapCached: 0 kB' 'Active: 8328632 kB' 'Inactive: 3337460 kB' 'Active(anon): 7972876 kB' 'Inactive(anon): 0 kB' 'Active(file): 355756 kB' 'Inactive(file): 3337460 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11393656 kB' 'Mapped: 122020 kB' 'AnonPages: 275720 kB' 'Shmem: 7700440 kB' 'KernelStack: 7480 kB' 'PageTables: 5024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 152356 kB' 'Slab: 336356 kB' 'SReclaimable: 152356 kB' 'SUnreclaim: 184000 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.642 18:45:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.642 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- 
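Node 0's surplus query returns 0, so the expected count for node0 stays at the requested 512, and the loop moves on to node 1. Roughly, per the hugepages.sh@115-117 trace lines; the nodes_test[] seeding shown here is hypothetical, chosen to match this run:

    # Per-node bookkeeping reconstructed from hugepages.sh@115-117.
    declare -a nodes_test=([0]=512 [1]=1024)   # requested split in this run
    resv=0                                     # HugePages_Rsvd from the trace

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                # reserved pages, 0 here
        surp=$(get_meminfo HugePages_Surp "$node")    # per-node surplus, 0 here
        (( nodes_test[node] += surp ))
    done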
00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:40.643 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:40.644 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 21129192 kB' 'MemUsed: 6582632 kB' 'SwapCached: 0 kB' 'Active: 3142044 kB' 'Inactive: 355952 kB' 'Active(anon): 3058028 kB' 'Inactive(anon): 0 kB' 'Active(file): 84016 kB' 'Inactive(file): 355952 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3215276 kB' 'Mapped: 54632 kB' 'AnonPages: 282784 kB' 'Shmem: 2775308 kB' 'KernelStack: 5336 kB' 'PageTables: 2736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 276476 kB' 'Slab: 480672 kB' 'SReclaimable: 276476 kB' 'SUnreclaim: 204196 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[trace condensed: the IFS / read / compare / continue cycle repeats for every node1 snapshot field from MemTotal through HugePages_Free, none of which matches HugePages_Surp]
00:02:40.904 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:40.904 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:40.904 18:45:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:40.904 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:40.904 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:40.904 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:40.904 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:40.904 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:02:40.904 node0=512
expecting 512 00:02:40.904 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:40.904 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:40.904 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:40.904 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:02:40.904 node1=1024 expecting 1024 00:02:40.904 18:45:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:02:40.904 00:02:40.904 real 0m1.413s 00:02:40.904 user 0m0.582s 00:02:40.904 sys 0m0.792s 00:02:40.904 18:45:18 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:40.904 18:45:18 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:40.904 ************************************ 00:02:40.904 END TEST custom_alloc 00:02:40.904 ************************************ 00:02:40.904 18:45:18 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:02:40.904 18:45:18 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:40.904 18:45:18 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:40.904 18:45:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:40.904 ************************************ 00:02:40.904 START TEST no_shrink_alloc 00:02:40.904 ************************************ 00:02:40.904 18:45:18 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:02:40.904 18:45:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:02:40.904 18:45:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:40.904 18:45:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:40.904 18:45:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:02:40.904 18:45:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:40.904 18:45:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:40.904 18:45:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:40.904 18:45:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:40.904 18:45:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:40.904 18:45:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:40.904 18:45:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:40.904 18:45:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:40.904 18:45:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:40.904 18:45:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:40.904 18:45:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:40.904 18:45:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:40.904 18:45:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:40.904 18:45:18 
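[Editor's aside: the sizing step just traced is worth spelling out. A minimal sketch, assuming the 2097152 argument is a size in kB (i.e. 2 GiB) and that Hugepagesize is the 2048 kB reported in the meminfo dumps below; variable names beyond nr_hugepages and nodes_test are illustrative, not SPDK's.]

#!/usr/bin/env bash
# Convert a requested pool size into a hugepage count, then pin it to the
# requested NUMA nodes -- the arithmetic behind "nr_hugepages=1024" above.
size_kb=2097152                                   # assumed: request in kB (2 GiB)
default_hugepage_kb=2048                          # Hugepagesize from /proc/meminfo
nr_hugepages=$((size_kb / default_hugepage_kb))   # 2097152 / 2048 = 1024
nodes_test=()
for node in 0; do                                 # only node 0 was requested here
	nodes_test[node]=$nr_hugepages
done
echo "node${node}=${nodes_test[node]} pages of ${default_hugepage_kb} kB"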
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:40.904 18:45:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:40.904 18:45:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:02:40.904 18:45:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:40.904 18:45:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:41.839 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:41.839 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:41.839 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:41.839 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:41.839 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:41.839 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:41.839 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:41.839 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:41.840 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:41.840 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:41.840 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:41.840 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:41.840 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:41.840 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:41.840 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:41.840 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:41.840 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- 
# mapfile -t mem 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41165920 kB' 'MemAvailable: 45083500 kB' 'Buffers: 2704 kB' 'Cached: 14606292 kB' 'SwapCached: 0 kB' 'Active: 11465384 kB' 'Inactive: 3693412 kB' 'Active(anon): 11025612 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553016 kB' 'Mapped: 175700 kB' 'Shmem: 10475812 kB' 'KReclaimable: 428832 kB' 'Slab: 817068 kB' 'SReclaimable: 428832 kB' 'SUnreclaim: 388236 kB' 'KernelStack: 12832 kB' 'PageTables: 7712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12155724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197080 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 18036736 kB' 'DirectMap1G: 49283072 kB' 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc 
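[Editor's aside: the wrapped trace around this point is the unrolled execution of a small /proc/meminfo reader. A minimal reconstruction, inferred only from the xtrace lines (mapfile, the "Node N " prefix strip, the IFS=': ' read loop); SPDK's actual setup/common.sh may differ in detail.]

#!/usr/bin/env bash
shopt -s extglob    # required for the +([0-9]) pattern used below

# get_meminfo KEY [NODE] -- print the value of KEY from /proc/meminfo, or
# from the per-node sysfs file when NODE is given.
get_meminfo() {
	local get=$1 node=${2:-}
	local var val _ line
	local mem_f=/proc/meminfo mem

	# Per-node stats carry a "Node N " prefix that the global file lacks.
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem <"$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")   # normalize both file formats

	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<<"$line"
		[[ $var == "$get" ]] || continue
		echo "$val" && return 0        # e.g. "0" for HugePages_Surp
	done
	return 1
}

get_meminfo HugePages_Surp    # prints 0 on the machine traced here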
-- setup/common.sh@32 -- # continue 00:02:42.105 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the same read loop walks SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed and VmallocChunk, issuing "continue" for each non-matching key] 00:02:42.106 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu ==
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.106 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.106 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.106 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.106 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.106 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.106 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.106 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.106 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.106 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:42.106 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:42.106 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:42.106 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:42.106 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:42.106 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:42.106 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:42.106 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:42.106 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.106 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:42.106 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.106 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.106 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.106 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41164648 kB' 'MemAvailable: 45082228 kB' 'Buffers: 2704 kB' 'Cached: 14606292 kB' 'SwapCached: 0 kB' 'Active: 11466500 kB' 'Inactive: 3693412 kB' 'Active(anon): 11026728 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554160 kB' 'Mapped: 175652 kB' 'Shmem: 10475812 kB' 'KReclaimable: 428832 kB' 'Slab: 817068 kB' 'SReclaimable: 428832 kB' 'SUnreclaim: 388236 kB' 'KernelStack: 12896 kB' 'PageTables: 7924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12173708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197112 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 18036736 kB' 'DirectMap1G: 49283072 kB' 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:42.107 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the read loop walks every field from Active(anon) through HugePages_Free, issuing "continue" for each non-matching key on its way to HugePages_Surp] 00:02:42.108 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
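[Editor's aside: for context on the counter being polled, HugePages_Surp counts pages allocated beyond nr_hugepages under overcommit, and these tests expect it to stay 0. A hedged sketch using the standard kernel sysfs interface (not SPDK code) that exposes the same counters per page size.]

#!/usr/bin/env bash
# Print the per-size hugepage counters; surplus_hugepages is nonzero only
# when nr_overcommit_hugepages allows allocation past the static pool.
for d in /sys/kernel/mm/hugepages/hugepages-*; do
	printf '%s: total=%s free=%s surplus=%s overcommit=%s\n' \
		"${d##*/}" \
		"$(<"$d/nr_hugepages")" \
		"$(<"$d/free_hugepages")" \
		"$(<"$d/surplus_hugepages")" \
		"$(<"$d/nr_overcommit_hugepages")"
done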
00:02:42.108 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.108 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.108 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.108 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.108 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.108 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:42.108 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:42.108 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:42.108 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:42.108 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:42.108 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:42.108 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:42.108 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:42.108 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.108 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:42.108 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.108 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.108 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.108 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.108 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.109 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41165140 kB' 'MemAvailable: 45082720 kB' 'Buffers: 2704 kB' 'Cached: 14606312 kB' 'SwapCached: 0 kB' 'Active: 11465668 kB' 'Inactive: 3693412 kB' 'Active(anon): 11025896 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553284 kB' 'Mapped: 175576 kB' 'Shmem: 10475832 kB' 'KReclaimable: 428832 kB' 'Slab: 817068 kB' 'SReclaimable: 428832 kB' 'SUnreclaim: 388236 kB' 'KernelStack: 12832 kB' 'PageTables: 7712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12155396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197064 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 18036736 kB' 'DirectMap1G: 49283072 kB' 00:02:42.109 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] [... per-key xtrace elided: the loop reads every remaining /proc/meminfo key with IFS=': ' and falls through via continue until the requested key matches ...] 00:02:42.110 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.110 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:42.110 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
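For readers skimming the trace: the loop condensed above is setup/common.sh's get_meminfo helper scanning a meminfo file one key at a time. A minimal standalone sketch of that parse, assuming only the key/value lookup matters here (names and structure simplified; this is not the real helper):

    #!/usr/bin/env bash
    # get_meminfo_sketch KEY [NODE] -- echo KEY's value from /proc/meminfo,
    # or from node NODE's meminfo file when a node number is given.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#Node [0-9]* }     # per-node files prefix keys with "Node N "
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var != "$get" ]]; then
                continue                  # the [[ key == ... ]] / continue pairs traced above
            fi
            echo "$val"
            return 0
        done < "$mem_f"
        return 1
    }

    get_meminfo_sketch HugePages_Rsvd     # prints 0 here, matching the echo 0 in the trace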
00:02:42.110 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:42.111 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:42.111 nr_hugepages=1024 00:02:42.111 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:42.111 resv_hugepages=0 00:02:42.111 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:42.111 surplus_hugepages=0 00:02:42.111 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:42.111 anon_hugepages=0 00:02:42.111 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:42.111 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:42.111 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:42.111 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:42.111 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:42.111 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:42.111 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:42.111 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.111 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:42.111 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.111 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.111 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.111 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.111 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.111 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41165140 kB' 'MemAvailable: 45082720 kB' 'Buffers: 2704 kB' 'Cached: 14606352 kB' 'SwapCached: 0 kB' 'Active: 11464840 kB' 'Inactive: 3693412 kB' 'Active(anon): 11025068 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552388 kB' 'Mapped: 175576 kB' 'Shmem: 10475872 kB' 'KReclaimable: 428832 kB' 'Slab: 817068 kB' 'SReclaimable: 428832 kB' 'SUnreclaim: 388236 kB' 'KernelStack: 12768 kB' 'PageTables: 7504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12155548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197064 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 18036736 kB' 'DirectMap1G: 49283072 kB' 00:02:42.111 
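The counters scraped above (surp, resv, and the echoed nr_hugepages) feed the consistency checks traced at hugepages.sh@107 and @109. A hedged sketch of that arithmetic in plain bash, with the literal values from this run filled in:

    # The total the kernel reports must equal requested + surplus + reserved.
    surp=0 resv=0 nr_hugepages=1024                              # values from the trace
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)  # 1024 on this box
    if (( total == nr_hugepages + surp + resv )); then
        echo "nr_hugepages=$nr_hugepages"                        # the echoes seen in the log
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
    else
        echo "hugepage accounting mismatch: total=$total" >&2
    fi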
18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] [... per-key xtrace elided: every other meminfo key is skipped via continue until HugePages_Total matches ...] 00:02:42.112 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.112 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:02:42.112 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:42.112 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:42.112 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:42.112 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:02:42.112 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:42.112 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:42.112 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 --
# for node in /sys/devices/system/node/node+([0-9]) 00:02:42.112 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:42.112 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:42.112 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:42.113 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:42.113 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:42.113 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:42.113 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:42.113 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:02:42.113 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:42.113 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:42.113 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.113 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:42.113 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:42.113 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.113 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.113 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.113 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.113 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 17926940 kB' 'MemUsed: 14902944 kB' 'SwapCached: 0 kB' 'Active: 8327976 kB' 'Inactive: 3337460 kB' 'Active(anon): 7972220 kB' 'Inactive(anon): 0 kB' 'Active(file): 355756 kB' 'Inactive(file): 3337460 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11393708 kB' 'Mapped: 121884 kB' 'AnonPages: 274804 kB' 'Shmem: 7700492 kB' 'KernelStack: 7448 kB' 'PageTables: 4856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 152356 kB' 'Slab: 336428 kB' 'SReclaimable: 152356 kB' 'SUnreclaim: 184072 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:42.113 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.113 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.113 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.113 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.113 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.113 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.113 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
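The get_nodes step traced above just enumerates /sys/devices/system/node and records each node's hugepage count before re-reading node0's surplus. A hedged sketch using plain globs and awk in place of the script's extglob pattern and parser:

    # Enumerate NUMA nodes and record per-node hugepage totals, then read
    # node0's surplus count the same way the trace above does.
    declare -A nodes_sys
    for node in /sys/devices/system/node/node[0-9]*; do
        n=${node##*node}
        nodes_sys[$n]=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
    done
    echo "no_nodes=${#nodes_sys[@]}"      # 2 on the machine traced here
    awk '/HugePages_Surp/ {print $NF}' /sys/devices/system/node/node0/meminfo   # -> 0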
00:02:42.113 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ [... per-key xtrace of the node0 meminfo parse elided: keys are skipped via continue until HugePages_Surp matches ...] 00:02:42.114 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.114 18:45:19
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:42.114 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:42.114 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:42.114 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.114 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:42.114 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:42.114 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:42.114 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:42.114 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:42.114 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:42.114 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:42.114 node0=1024 expecting 1024 00:02:42.114 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:42.114 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:02:42.114 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:02:42.114 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:02:42.114 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:42.114 18:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:43.489 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:43.489 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:43.489 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:43.489 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:43.489 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:43.489 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:43.489 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:43.489 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:43.489 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:43.489 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:43.489 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:43.489 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:43.489 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:43.489 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:43.489 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:43.489 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:43.489 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:43.489 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:02:43.489 18:45:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:02:43.489 18:45:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:02:43.489 18:45:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:43.489 18:45:20 
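The status output above reflects two checks that run before anything is changed: which kernel driver each PCI function is currently bound to, and whether each NUMA node already holds the expected number of hugepages. A rough sketch of those probes, using only standard sysfs paths (this is not the SPDK scripts/setup.sh itself, and the expectation table is copied from this run), might look like:

    #!/usr/bin/env bash
    # Sketch only: approximates the probes behind the status lines above.

    # 1) Which kernel driver is each PCI function bound to (e.g. vfio-pci)?
    for dev in /sys/bus/pci/devices/*; do
        drv=$(readlink "$dev/driver" 2>/dev/null) || continue  # skip unbound devices
        printf '%s: using the %s driver\n' "${dev##*/}" "${drv##*/}"
    done

    # 2) Does each node's 2048 kB hugepage count match the test's expectation
    #    (the log printed "node0=1024 expecting 1024")?
    declare -A nodes_test=([0]=1024)  # expectation taken from this log run
    for node in "${!nodes_test[@]}"; do
        nr=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
        got=$(<"$nr")
        echo "node$node=$got expecting ${nodes_test[$node]}"
        [[ $got -eq ${nodes_test[$node]} ]] || exit 1
    done

Because CLEAR_HUGE=no and node0 already holds 1024 pages, the NRHUGE=512 request is already satisfied without reallocating, which is what the INFO line above reports.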
00:02:43.489 18:45:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:43.489 18:45:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:43.489 18:45:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:43.489 18:45:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:43.489 18:45:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:43.489 18:45:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:43.489 18:45:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:43.489 18:45:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:43.489 18:45:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:43.489 18:45:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:43.489 18:45:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:43.489 18:45:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:43.489 18:45:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:43.489 18:45:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:43.489 18:45:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:43.489 18:45:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:43.489 18:45:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:43.490 18:45:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41167528 kB' 'MemAvailable: 45085108 kB' 'Buffers: 2704 kB' 'Cached: 14606404 kB' 'SwapCached: 0 kB' 'Active: 11466800 kB' 'Inactive: 3693412 kB' 'Active(anon): 11027028 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554376 kB' 'Mapped: 175676 kB' 'Shmem: 10475924 kB' 'KReclaimable: 428832 kB' 'Slab: 817380 kB' 'SReclaimable: 428832 kB' 'SUnreclaim: 388548 kB' 'KernelStack: 12816 kB' 'PageTables: 7624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12156168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197224 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 18036736 kB' 'DirectMap1G: 49283072 kB'
[xtrace condensed, 00:02:43.490: setup/common.sh@31-@32 compare every field from MemTotal through HardwareCorrupted against AnonHugePages and skip each with 'continue']
00:02:43.490 18:45:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:43.490 18:45:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:43.490 18:45:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:43.490 18:45:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:02:43.490 18:45:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[xtrace condensed, 00:02:43.490, 18:45:21: the setup/common.sh@17-@31 get_meminfo prologue repeats as above, now with get=HugePages_Surp]
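The xtrace above shows get_meminfo walking the meminfo file one 'key: value' pair at a time with IFS=': ' and skipping every non-matching key via 'continue', which is why each field produces its own trace entry. A simplified reconstruction of that helper (hedged: the real setup/common.sh is more involved, but the mapfile, the "Node N " prefix strip, and the IFS split are all visible in the trace) could be:

    #!/usr/bin/env bash
    shopt -s extglob  # needed for the +([0-9]) pattern below

    # get_meminfo KEY [NODE] -- print KEY's value from /proc/meminfo, or from
    # the per-node copy under sysfs when NODE is given.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        local line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip it so the
        # same 'key: value' parse works for both variants.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            # IFS=': ' splits on the colon and on spaces, so var receives the
            # key and val the number (a trailing "kB" lands in _).
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo HugePages_Free   # -> 1024 in this run
    get_meminfo AnonHugePages    # -> 0, the value assigned to anon above

Note the detail at common.sh@23 in the trace: node is empty for these calls, so the test for /sys/devices/system/node/node/meminfo fails and the global /proc/meminfo is read instead.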
00:02:43.490 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41167712 kB' 'MemAvailable: 45085292 kB' 'Buffers: 2704 kB' 'Cached: 14606408 kB' 'SwapCached: 0 kB' 'Active: 11466404 kB' 'Inactive: 3693412 kB' 'Active(anon): 11026632 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553956 kB' 'Mapped: 175656 kB' 'Shmem: 10475928 kB' 'KReclaimable: 428832 kB' 'Slab: 817380 kB' 'SReclaimable: 428832 kB' 'SUnreclaim: 388548 kB' 'KernelStack: 12832 kB' 'PageTables: 7664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12156184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197192 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 18036736 kB' 'DirectMap1G: 49283072 kB'
[xtrace condensed, 00:02:43.490-00:02:43.491: setup/common.sh@31-@32 compare every field from MemTotal through HugePages_Rsvd against HugePages_Surp and skip each with 'continue']
00:02:43.491 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:43.491 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:43.491 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:43.491 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:43.491 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[xtrace condensed, 00:02:43.491: the setup/common.sh@17-@31 get_meminfo prologue repeats once more, with get=HugePages_Rsvd]
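Each of these get_meminfo calls returns one of the four hugetlb counters the kernel exports, and in this run they read Total=1024, Free=1024, Rsvd=0, Surp=0. As a rough guide to what a consumer can derive from them, per the kernel's hugetlbpage documentation surplus pages are transient overcommit and reserved pages are promised to mappings but not yet faulted in, so an illustrative bit of arithmetic (values hard-coded from this log) would be:

    #!/usr/bin/env bash
    # Illustrative arithmetic over the counters read above (values from this log).
    total=1024  # HugePages_Total
    free=1024   # HugePages_Free
    rsvd=0      # HugePages_Rsvd
    surp=0      # HugePages_Surp
    echo "persistent pool:  $(( total - surp ))"  # pages that survive memory pressure
    echo "usable right now: $(( free - rsvd ))"   # free and not promised to a mapping

Here both derived quantities come out to 1024, consistent with the 'node0=1024 expecting 1024' verification earlier.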
00:02:43.491 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41170616 kB' 'MemAvailable: 45088196 kB' 'Buffers: 2704 kB' 'Cached: 14606428 kB' 'SwapCached: 0 kB' 'Active: 11466300 kB' 'Inactive: 3693412 kB' 'Active(anon): 11026528 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553792 kB' 'Mapped: 175580 kB' 'Shmem: 10475948 kB' 'KReclaimable: 428832 kB' 'Slab: 817396 kB' 'SReclaimable: 428832 kB' 'SUnreclaim: 388564 kB' 'KernelStack: 12832 kB' 'PageTables: 7660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12156208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197160 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 18036736 kB' 'DirectMap1G: 49283072 kB'
[xtrace condensed, 00:02:43.491-00:02:43.492: setup/common.sh@31-@32 begin the same per-key scan for HugePages_Rsvd, skipping MemTotal through NFS_Unstable with 'continue'; the scan continues below]
00:02:43.492 18:45:21
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.492 18:45:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:43.492 nr_hugepages=1024 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:43.492 resv_hugepages=0 00:02:43.492 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:43.492 surplus_hugepages=0 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:43.493 anon_hugepages=0 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.493 18:45:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41171052 kB' 'MemAvailable: 45088632 kB' 'Buffers: 2704 kB' 'Cached: 14606428 kB' 'SwapCached: 0 kB' 'Active: 11466300 kB' 'Inactive: 3693412 kB' 'Active(anon): 11026528 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553792 kB' 'Mapped: 175580 kB' 'Shmem: 10475948 kB' 'KReclaimable: 428832 kB' 'Slab: 817396 kB' 'SReclaimable: 428832 kB' 'SUnreclaim: 388564 kB' 'KernelStack: 12832 kB' 'PageTables: 7660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12156228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197160 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 18036736 kB' 'DirectMap1G: 49283072 kB' 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
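A note on what this trace is doing: get_meminfo (setup/common.sh) snapshots /proc/meminfo -- or a node's own meminfo file -- into an array, then scans it entry by entry until the requested key matches, which is why every non-matching key produces the [[ ... ]] / continue pair seen above. A minimal bash sketch reconstructed from this xtrace; it follows the traced shape but is illustrative, not the verbatim SPDK source:

    shopt -s extglob   # for the +([0-9]) pattern used when stripping "Node N "

    # get_meminfo KEY [NODE] -- reconstructed from the xtrace above
    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo mem
        # A node-specific query reads that node's meminfo file instead
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the continue storm in the trace
            echo "${val:-0}"                   # e.g. 1024 for HugePages_Total here
            return 0
        done
        return 1
    }

The surrounding hugepages.sh checks then assert the arithmetic on those values, e.g. (( 1024 == nr_hugepages + surp + resv )) with resv=0 and surp=0 as echoed above.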
00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [xtrace elided: the same compare/continue sequence repeats for every other /proc/meminfo key, Active through CmaFree] 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31
-- # read -r var val _ 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:43.493 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 17934744 kB' 'MemUsed: 14895140 kB' 'SwapCached: 0 kB' 'Active: 8329620 kB' 
'Inactive: 3337460 kB' 'Active(anon): 7973864 kB' 'Inactive(anon): 0 kB' 'Active(file): 355756 kB' 'Inactive(file): 3337460 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11393780 kB' 'Mapped: 121888 kB' 'AnonPages: 276488 kB' 'Shmem: 7700564 kB' 'KernelStack: 7448 kB' 'PageTables: 4860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 152356 kB' 'Slab: 336676 kB' 'SReclaimable: 152356 kB' 'SUnreclaim: 184320 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.494 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.494 
18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ [xtrace elided: the same compare/continue sequence repeats for every node0 meminfo key, Inactive(anon) through FileHugePages; trace timestamps advance from 00:02:43.494 to 00:02:43.753 across this span] 00:02:43.753 18:45:21
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.753 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.753 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.753 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.753 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.753 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.753 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.753 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.753 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.753 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.753 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.753 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.753 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.753 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:43.754 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.754 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.754 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.754 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:43.754 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:43.754 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:43.754 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:43.754 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:43.754 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:43.754 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:43.754 node0=1024 expecting 1024 00:02:43.754 18:45:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:43.754 00:02:43.754 real 0m2.816s 00:02:43.754 user 0m1.193s 00:02:43.754 sys 0m1.545s 00:02:43.754 18:45:21 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:43.754 18:45:21 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:43.754 ************************************ 00:02:43.754 END TEST no_shrink_alloc 00:02:43.754 ************************************ 00:02:43.754 18:45:21 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:02:43.754 18:45:21 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:43.754 18:45:21 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:43.754 18:45:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:43.754 18:45:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:43.754 18:45:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:43.754 18:45:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:43.754 18:45:21 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:43.754 18:45:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:43.754 18:45:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:43.754 18:45:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:43.754 18:45:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:43.754 18:45:21 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:43.754 18:45:21 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:43.754 00:02:43.754 real 0m11.271s 00:02:43.754 user 0m4.322s 00:02:43.754 sys 0m5.764s 00:02:43.754 18:45:21 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:43.754 18:45:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:43.754 ************************************ 00:02:43.754 END TEST hugepages 00:02:43.754 ************************************ 00:02:43.754 18:45:21 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:43.754 18:45:21 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:43.754 18:45:21 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:43.754 18:45:21 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:43.754 ************************************ 00:02:43.754 START TEST driver 00:02:43.754 ************************************ 00:02:43.754 18:45:21 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:43.754 * Looking for test storage... 
00:02:43.754 18:45:21 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:02:43.754 18:45:21 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:02:43.754 18:45:21 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:02:43.754 18:45:21 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:02:43.754 ************************************
00:02:43.754 START TEST driver
00:02:43.754 ************************************
00:02:43.754 18:45:21 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:02:43.754 * Looking for test storage...
00:02:43.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:02:43.754 18:45:21 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:02:43.754 18:45:21 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:43.754 18:45:21 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:46.283 18:45:23 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:02:46.283 18:45:23 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:02:46.283 18:45:23 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable
00:02:46.283 18:45:23 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:02:46.283 ************************************
00:02:46.283 START TEST guess_driver
00:02:46.283 ************************************
00:02:46.283 18:45:23 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver
00:02:46.283 18:45:23 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:02:46.283 18:45:23 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:02:46.283 18:45:23 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:02:46.283 18:45:23 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:02:46.283 18:45:23 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups
00:02:46.283 18:45:23 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:02:46.283 18:45:23 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:02:46.283 18:45:23 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:02:46.283 18:45:23 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:02:46.283 18:45:23 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 ))
00:02:46.283 18:45:23 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:02:46.283 18:45:23 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:02:46.283 18:45:23 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:02:46.283 18:45:23 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:02:46.283 18:45:23 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:02:46.283 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:02:46.283 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:02:46.283 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:02:46.283 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:02:46.283 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:02:46.283 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:02:46.283 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:02:46.283 18:45:23 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:02:46.283 18:45:23 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:02:46.283 18:45:23 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
00:02:46.283 18:45:23 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:02:46.283 18:45:23 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:02:46.283 Looking for driver=vfio-pci
00:02:46.283 18:45:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:02:46.283 18:45:23 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:02:46.283 18:45:23 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:02:46.283 18:45:23 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
[... setup/driver.sh@58/@61/@57 xtrace repeats from 00:02:47.657 (18:45:24) to 00:02:48.590 (18:45:25): for each status line the marker '->' and the driver vfio-pci are checked, then the next line is read ...]
00:02:48.590 18:45:26 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:02:48.590 18:45:26 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:02:48.590 18:45:26 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:48.590 18:45:26 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:51.120
00:02:51.120 real	0m4.774s
00:02:51.120 user	0m1.080s
00:02:51.120 sys	0m1.794s
18:45:28 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable
18:45:28 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:02:51.120 ************************************
00:02:51.120 END TEST guess_driver
00:02:51.120 ************************************
00:02:51.120
00:02:51.120 real	0m7.291s
00:02:51.120 user	0m1.644s
00:02:51.120 sys	0m2.754s
00:02:51.120 18:45:28 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable
00:02:51.120 18:45:28 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:02:51.120 ************************************
00:02:51.120 END TEST driver
00:02:51.120 ************************************
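The guess_driver trace boils down to one decision: vfio-pci is picked when the host has populated IOMMU groups (the (( 141 > 0 )) check above) or unsafe no-IOMMU mode is enabled, and modprobe can resolve vfio_pci to a chain of .ko modules. A sketch of that decision, assuming the structure the @21-@37 trace implies (the function name pick_vfio is hypothetical):

    pick_vfio() {
        local iommu_groups unsafe_vfio=N
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        iommu_groups=(/sys/kernel/iommu_groups/*)
        if ((${#iommu_groups[@]} > 0)) || [[ $unsafe_vfio == Y ]]; then
            # vfio_pci counts as available when its dependency chain resolves
            # to real module files, matching the *.ko* test in the trace.
            if [[ $(modprobe --show-depends vfio_pci) == *.ko* ]]; then
                echo vfio-pci
                return 0
            fi
        fi
        return 1
    }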
00:02:51.120 18:45:28 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:02:51.120 18:45:28 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:02:51.120 18:45:28 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:02:51.120 18:45:28 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:02:51.120 ************************************
00:02:51.120 START TEST devices
00:02:51.120 ************************************
00:02:51.120 18:45:28 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:02:51.120 * Looking for test storage...
00:02:51.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:02:51.120 18:45:28 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:02:51.120 18:45:28 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:02:51.120 18:45:28 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:51.120 18:45:28 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:52.496 18:45:29 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:02:52.496 18:45:29 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:02:52.496 18:45:29 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:02:52.496 18:45:29 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf
00:02:52.496 18:45:29 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:02:52.496 18:45:29 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:02:52.496 18:45:29 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:02:52.496 18:45:29 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:02:52.496 18:45:29 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:02:52.496 18:45:29 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:02:52.496 18:45:29 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:02:52.496 18:45:29 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:02:52.496 18:45:29 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:02:52.496 18:45:29 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:02:52.496 18:45:29 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:02:52.496 18:45:29 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:02:52.496 18:45:29 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:02:52.496 18:45:29 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:0b:00.0
00:02:52.496 18:45:29 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]]
00:02:52.496 18:45:29 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:02:52.496 18:45:29 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:02:52.496 18:45:29 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:02:52.496 No valid GPT data, bailing
00:02:52.496 18:45:30 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:02:52.496 18:45:30 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:02:52.496 18:45:30 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:02:52.496 18:45:30 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:02:52.496 18:45:30 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1
00:02:52.496 18:45:30 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:02:52.496 18:45:30 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016
00:02:52.496 18:45:30 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size ))
00:02:52.496 18:45:30 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:02:52.496 18:45:30 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:0b:00.0
00:02:52.496 18:45:30 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:02:52.496 18:45:30 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
00:02:52.496 18:45:30 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:02:52.496 18:45:30 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:02:52.496 18:45:30 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable
00:02:52.496 18:45:30 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:02:52.496 ************************************
00:02:52.496 START TEST nvme_mount
00:02:52.496 ************************************
00:02:52.496 18:45:30 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount
00:02:52.496 18:45:30 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:02:52.496 18:45:30 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:02:52.496 18:45:30 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:02:52.496 18:45:30 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:02:52.497 18:45:30 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
[... setup/common.sh@39-@51 xtrace: partition bookkeeping for one 1 GiB partition (disk=nvme0n1, part_no=1, size=1073741824, parts=(nvme0n1p1), size /= 512) ...]
00:02:52.497 18:45:30 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:02:52.497 18:45:30 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:02:53.876 Creating new GPT entries in memory.
00:02:53.876 GPT data structures destroyed! You may now partition the disk using fdisk or
00:02:53.876 other utilities.
00:02:53.876 18:45:31 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:02:53.876 18:45:31 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:02:53.876 18:45:31 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:02:53.876 18:45:31 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:02:53.876 18:45:31 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:02:54.820 Creating new GPT entries in memory.
00:02:54.820 The operation has completed successfully.
00:02:54.820 18:45:32 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ ))
00:02:54.820 18:45:32 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:02:54.820 18:45:32 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3015160
00:02:54.820 18:45:32 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:02:54.820 18:45:32 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=
00:02:54.820 18:45:32 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:02:54.820 18:45:32 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:02:54.820 18:45:32 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:02:54.820 18:45:32 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
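The partition_drive/mkfs trace above reduces to a zap, one sgdisk --new per 1 GiB partition, then mkfs.ext4 and mount. A condensed sketch under the same sector math (2048 + 2097152 - 1 = 2099199, matching --new=1:2048:2099199 above); the real helper also synchronizes partition uevents via sync_dev_uevents.sh and waits on it, which is omitted here, and mkfs_and_mount is a hypothetical name for the traced mkfs() helper:

    partition_drive() {
        local disk=$1 part_no=${2:-1} size=1073741824   # 1 GiB per partition
        local part part_start=0 part_end=0
        ((size /= 512))   # sgdisk addresses 512-byte sectors
        sgdisk "/dev/$disk" --zap-all
        for ((part = 1; part <= part_no; part++)); do
            ((part_start = part_start == 0 ? 2048 : part_end + 1))
            ((part_end = part_start + size - 1))
            # flock serializes sgdisk calls against the same disk
            flock "/dev/$disk" sgdisk "/dev/$disk" "--new=$part:$part_start:$part_end"
        done
    }

    mkfs_and_mount() {
        local dev=$1 mount=$2 size=$3   # size optional, e.g. 1024M later in the log
        mkdir -p "$mount"
        [[ -e $dev ]] || return 1
        mkfs.ext4 -qF "$dev" $size
        mount "$dev" "$mount"
    }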
00:02:54.820 18:45:32 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:02:54.820 18:45:32 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0
00:02:54.820 18:45:32 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:02:54.820 18:45:32 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:02:54.820 18:45:32 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:02:54.820 18:45:32 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:02:54.820 18:45:32 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:02:54.820 18:45:32 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:02:54.820 18:45:32 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:02:54.820 18:45:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:54.820 18:45:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0
00:02:54.820 18:45:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:02:54.820 18:45:32 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:02:54.820 18:45:32 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
[... setup/devices.sh@62/@60 xtrace at 18:45:33: 0000:00:04.0-7 are each compared against 0000:0b:00.0 and skipped ...]
18:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]]
18:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
18:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
18:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[... 0000:80:04.0-7 compared against 0000:0b:00.0 and skipped ...]
00:02:56.041 18:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:02:56.041 18:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:02:56.041 18:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:02:56.041 18:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:02:56.041 18:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:02:56.041 18:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme
00:02:56.041 18:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:02:56.041 18:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:02:56.041 18:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:02:56.041 18:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:02:56.041 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:02:56.041 18:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:02:56.041 18:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:02:56.299 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:02:56.299 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:02:56.299 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:02:56.299 /dev/nvme0n1: calling ioctl to re-read partition table: Success
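The wipefs output above is the teardown: the ext4 superblock magic (53 ef at offset 0x438) is erased from the partition, then the GPT headers (the "EFI PART" signature at both ends of the disk) and the protective MBR signature (55 aa) from the whole disk. A sketch of that cleanup, mirroring devices.sh@20-@28; the variables come from the traced @95-@98 assignments:

    cleanup_nvme() {
        # Unmount if still mounted, then wipe every filesystem/partition
        # signature so the next sub-test starts from a blank disk.
        mountpoint -q "$nvme_mount" && umount "$nvme_mount"
        [[ -b /dev/$nvme_disk_p ]] && wipefs --all "/dev/$nvme_disk_p"
        [[ -b /dev/$nvme_disk ]] && wipefs --all "/dev/$nvme_disk"
    }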
00:02:56.299 18:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:02:56.299 18:45:33 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:02:56.299 18:45:33 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:02:56.299 18:45:33 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:02:56.299 18:45:33 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:02:56.299 18:45:33 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:02:56.299 18:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:02:56.299 18:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0
00:02:56.299 18:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1
00:02:56.299 18:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:02:56.299 18:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:02:56.299 18:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:02:56.299 18:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:02:56.299 18:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:02:56.299 18:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:02:56.299 18:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:56.299 18:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0
00:02:56.299 18:45:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:02:56.299 18:45:33 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:02:56.299 18:45:33 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
[... setup/devices.sh@62/@60 xtrace at 00:02:57.234, 18:45:34: 0000:00:04.0-7 compared against 0000:0b:00.0 and skipped ...]
18:45:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]]
18:45:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
18:45:34 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
18:45:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[... 0000:80:04.0-7 compared against 0000:0b:00.0 and skipped ...]
00:02:57.493 18:45:34 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
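Each verify pass above reads the table that setup.sh config prints while PCI_ALLOWED is pinned to the test controller, and requires the expected "Active devices: ..." annotation on the 0000:0b:00.0 row: mount@nvme0n1:nvme0n1p1 while the partition is mounted, mount@nvme0n1:nvme0n1 while the whole disk is, data@nvme0n1 once the filesystem is unmounted but the disk still carries data, and holder@...:dm-0 for the device-mapper case later. A loose sketch of that check; feeding the status lines through process substitution is an assumption about the plumbing, not shown in the trace:

    verify() {
        local dev=$1 mounts=$2 mount_point=$3 test_file=$4
        local pci status found=0
        while read -r pci _ _ status; do
            [[ $pci == "$dev" ]] || continue
            # e.g. "Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev"
            [[ $status == *"Active devices: "*"$mounts"* ]] && found=1
        done < <(PCI_ALLOWED=$dev setup output config)
        ((found == 1))
    }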
00:02:57.493 18:45:34 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:02:57.493 18:45:34 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:02:57.493 18:45:34 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
18:45:34 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
18:45:34 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
18:45:34 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:0b:00.0 data@nvme0n1 '' ''
18:45:34 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0
18:45:34 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1
18:45:34 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=
18:45:34 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=
18:45:34 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
18:45:34 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
18:45:34 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
18:45:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
18:45:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0
18:45:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
18:45:34 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
18:45:34 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
[... setup/devices.sh@62/@60 xtrace at 00:02:58.427, 18:45:35: 0000:00:04.0-7 compared against 0000:0b:00.0 and skipped ...]
00:02:58.428 18:45:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]]
00:02:58.428 18:45:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]]
00:02:58.428 18:45:35 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:02:58.428 18:45:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[... 0000:80:04.0-7 compared against 0000:0b:00.0 and skipped ...]
00:02:58.687 18:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:02:58.687 18:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:02:58.687 18:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0
00:02:58.687 18:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme
00:02:58.687 18:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:02:58.687 18:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:02:58.687 18:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:02:58.687 18:45:36 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:02:58.687 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:02:58.687
00:02:58.687 real	0m6.115s
00:02:58.687 user	0m1.365s
00:02:58.687 sys	0m2.264s
18:45:36 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable
18:45:36 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x
00:02:58.687 ************************************
00:02:58.687 END TEST nvme_mount
00:02:58.687 ************************************
18:45:36 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount
18:45:36 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
18:45:36 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable
18:45:36 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:02:58.687 ************************************
00:02:58.687 START TEST dm_mount
00:02:58.687 ************************************
00:02:58.687 18:45:36 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount
00:02:58.687 18:45:36 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1
00:02:58.687 18:45:36 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:02:58.687 18:45:36 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:02:58.687 18:45:36 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1
[... setup/common.sh@39-@51 xtrace: partition bookkeeping for two 1 GiB partitions (disk=nvme0n1, part_no=2, parts=(nvme0n1p1 nvme0n1p2), size /= 512) ...]
18:45:36 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
18:45:36 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:03:00.066 Creating new GPT entries in memory.
00:03:00.066 GPT data structures destroyed! You may now partition the disk using fdisk or
00:03:00.066 other utilities.
00:03:00.066 18:45:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:03:00.066 18:45:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:00.066 18:45:37 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:00.066 18:45:37 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:00.066 18:45:37 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:03:01.003 Creating new GPT entries in memory.
00:03:01.003 The operation has completed successfully.
00:03:01.003 18:45:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:03:01.003 18:45:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:01.003 18:45:38 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:01.003 18:45:38 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:01.003 18:45:38 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:03:01.938 The operation has completed successfully.
00:03:01.938 18:45:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:03:01.938 18:45:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:01.938 18:45:39 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3017482
00:03:01.938 18:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:03:01.938 18:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:01.938 18:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:01.938 18:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:03:01.938 18:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5}
00:03:01.938 18:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:01.938 18:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break
00:03:01.939 18:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:01.939 18:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:03:01.939 18:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0
00:03:01.939 18:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0
00:03:01.939 18:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]]
00:03:01.939 18:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]]
00:03:01.939 18:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:01.939 18:45:39 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size=
00:03:01.939 18:45:39 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:01.939 18:45:39 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:01.939 18:45:39 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
00:03:01.939 18:45:39 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
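dm_mount stitches the two freshly created partitions into a single device-mapper device named nvme_dm_test, waits for /dev/mapper/nvme_dm_test to appear, and confirms both partitions list dm-0 as a holder before putting ext4 on top. The dm table itself is not echoed in the xtrace; the linear layout below is a hypothetical reconstruction consistent with two 1 GiB (2097152-sector) partitions:

    # dmsetup reads the table from stdin; columns are
    # <start> <length> linear <backing device> <offset>, in 512-byte sectors.
    printf '%s\n' \
        '0 2097152 linear /dev/nvme0n1p1 0' \
        '2097152 2097152 linear /dev/nvme0n1p2 0' |
        dmsetup create nvme_dm_test

    # Checks corresponding to devices.sh@161-@169 in the trace:
    readlink -f /dev/mapper/nvme_dm_test            # resolves to /dev/dm-0
    test -e /sys/class/block/nvme0n1p1/holders/dm-0
    test -e /sys/class/block/nvme0n1p2/holders/dm-0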
00:03:01.939 18:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:0b:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:01.939 18:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0
00:03:01.939 18:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test
00:03:01.939 18:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:01.939 18:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:01.939 18:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:03:01.939 18:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:03:01.939 18:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # :
00:03:01.939 18:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:03:01.939 18:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:01.939 18:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0
00:03:01.939 18:45:39 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:03:01.939 18:45:39 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:01.939 18:45:39 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
[... setup/devices.sh@62/@60 xtrace at 00:03:02.875, 18:45:40: 0000:00:04.0-7 compared against 0000:0b:00.0 and skipped ...]
18:45:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]]
18:45:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
18:45:40 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
18:45:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[... 0000:80:04.0-7 compared against 0000:0b:00.0 and skipped ...]
18:45:40 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
18:45:40 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]]
18:45:40 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
18:45:40 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
18:45:40 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
18:45:40 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
18:45:40 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:0b:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' ''
18:45:40 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0
18:45:40 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0
00:03:03.134 18:45:40 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=
18:45:40 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:03.134 18:45:40 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:03.134 18:45:40 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:03.134 18:45:40 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:03.134 18:45:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:03.134 18:45:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:03:03.134 18:45:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:03.134 18:45:40 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:03.134 18:45:40 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:04.511 18:45:41 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:04.512 18:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:04.512 18:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:04.512 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:04.512 18:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:04.512 18:45:42 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:04.512 00:03:04.512 real 0m5.790s 00:03:04.512 user 0m0.971s 00:03:04.512 sys 0m1.665s 00:03:04.512 18:45:42 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:04.512 18:45:42 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:04.512 ************************************ 00:03:04.512 END TEST dm_mount 00:03:04.512 ************************************ 00:03:04.512 18:45:42 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:04.512 18:45:42 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:04.512 18:45:42 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:04.512 18:45:42 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 
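The cleanup_dm trace above (unmount the test mount point, tear down the device-mapper target, wipe filesystem signatures off the backing partitions) condenses to the following sketch. The mount path, the nvme_dm_test name, and the nvme0n1p1/p2 partitions are the ones this particular run uses; they are shown purely for illustration.

# Sketch of the dm_mount cleanup sequence traced above.
cleanup_dm() {
    local mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
    if mountpoint -q "$mnt"; then
        umount "$mnt"                                   # only unmount if still mounted
    fi
    [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1   # clear fs signatures
    [[ -b /dev/nvme0n1p2 ]] && wipefs --all /dev/nvme0n1p2
}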
00:03:04.512 18:45:42 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:04.512 18:45:42 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:04.512 18:45:42 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:04.771 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:04.771 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:04.771 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:04.771 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:04.771 18:45:42 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:04.771 18:45:42 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:04.771 18:45:42 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:04.771 18:45:42 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:04.771 18:45:42 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:04.771 18:45:42 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:04.771 18:45:42 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:04.772 00:03:04.772 real 0m13.815s 00:03:04.772 user 0m3.006s 00:03:04.772 sys 0m4.931s 00:03:04.772 18:45:42 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:04.772 18:45:42 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:04.772 ************************************ 00:03:04.772 END TEST devices 00:03:04.772 ************************************ 00:03:04.772 00:03:04.772 real 0m43.059s 00:03:04.772 user 0m12.245s 00:03:04.772 sys 0m18.898s 00:03:04.772 18:45:42 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:04.772 18:45:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:04.772 ************************************ 00:03:04.772 END TEST setup.sh 00:03:04.772 ************************************ 00:03:04.772 18:45:42 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:06.148 Hugepages 00:03:06.148 node hugesize free / total 00:03:06.148 node0 1048576kB 0 / 0 00:03:06.148 node0 2048kB 2048 / 2048 00:03:06.148 node1 1048576kB 0 / 0 00:03:06.148 node1 2048kB 0 / 0 00:03:06.148 00:03:06.148 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:06.148 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:06.148 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:06.148 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:06.148 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:06.148 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:06.148 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:06.148 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:06.148 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:06.148 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:06.148 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:06.148 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:06.148 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:06.148 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:06.148 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:06.148 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:06.148 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:06.148 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:06.148 18:45:43 -- spdk/autotest.sh@130 -- # uname -s 
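The Hugepages table printed by setup.sh status above comes from the kernel's per-NUMA-node sysfs counters. A minimal way to read the same free/total numbers, assuming only the standard /sys/devices/system/node layout:

# Read free/total hugepages per NUMA node, as in the table above.
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        size=${hp##*hugepages-}                  # e.g. 2048kB
        total=$(cat "$hp/nr_hugepages")
        free=$(cat "$hp/free_hugepages")
        echo "$(basename "$node") $size $free / $total"
    done
done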
00:03:06.148 18:45:43 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:06.148 18:45:43 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:06.148 18:45:43 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:07.525 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:07.525 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:07.525 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:07.525 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:07.525 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:07.525 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:07.525 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:07.525 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:07.525 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:07.525 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:07.525 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:07.525 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:07.525 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:07.525 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:07.525 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:07.525 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:08.461 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:03:08.720 18:45:46 -- common/autotest_common.sh@1532 -- # sleep 1 00:03:09.658 18:45:47 -- common/autotest_common.sh@1533 -- # bdfs=() 00:03:09.658 18:45:47 -- common/autotest_common.sh@1533 -- # local bdfs 00:03:09.658 18:45:47 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:03:09.658 18:45:47 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:03:09.658 18:45:47 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:09.658 18:45:47 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:09.658 18:45:47 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:09.658 18:45:47 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:09.658 18:45:47 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:09.658 18:45:47 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:09.658 18:45:47 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:0b:00.0 00:03:09.658 18:45:47 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:11.034 Waiting for block devices as requested 00:03:11.034 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:11.034 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:11.034 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:11.034 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:11.034 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:11.292 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:11.292 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:11.292 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:11.292 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:03:11.552 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:11.552 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:11.811 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:11.811 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:11.811 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:11.811 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:12.068 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:12.068 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:12.068 18:45:49 -- common/autotest_common.sh@1538 -- # 
for bdf in "${bdfs[@]}" 00:03:12.068 18:45:49 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0 00:03:12.068 18:45:49 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:03:12.069 18:45:49 -- common/autotest_common.sh@1502 -- # grep 0000:0b:00.0/nvme/nvme 00:03:12.069 18:45:49 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:03:12.069 18:45:49 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]] 00:03:12.069 18:45:49 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:03:12.069 18:45:49 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:03:12.069 18:45:49 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:03:12.069 18:45:49 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:03:12.069 18:45:49 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:12.069 18:45:49 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:12.069 18:45:49 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:03:12.327 18:45:49 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:03:12.327 18:45:49 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:12.327 18:45:49 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:12.327 18:45:49 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:03:12.327 18:45:49 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:12.327 18:45:49 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:12.327 18:45:49 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:12.327 18:45:49 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:12.327 18:45:49 -- common/autotest_common.sh@1557 -- # continue 00:03:12.327 18:45:49 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:12.327 18:45:49 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:12.327 18:45:49 -- common/autotest_common.sh@10 -- # set +x 00:03:12.327 18:45:49 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:12.327 18:45:49 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:12.327 18:45:49 -- common/autotest_common.sh@10 -- # set +x 00:03:12.327 18:45:49 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:13.264 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:13.264 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:13.523 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:13.523 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:13.523 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:13.523 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:13.523 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:13.523 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:13.523 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:13.523 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:13.523 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:13.523 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:13.523 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:13.523 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:13.523 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:13.523 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:14.471 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:03:14.730 18:45:52 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:14.730 18:45:52 -- common/autotest_common.sh@730 -- # xtrace_disable 
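The pre-cleanup check above derives namespace-management support from the controller's OACS field: nvme id-ctrl reports oacs as 0xf here, and masking bit 3 (0x8, the Namespace Management capability) yields the 8 seen in the trace. A hedged sketch of the same check (requires nvme-cli; the device path is this run's controller):

# Does the controller support namespace management (OACS bit 3)?
ctrlr=/dev/nvme0
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)   # e.g. ' 0xf'
if (( oacs & 0x8 )); then
    echo "$ctrlr supports namespace management"
fi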
00:03:14.730 18:45:52 -- common/autotest_common.sh@10 -- # set +x 00:03:14.730 18:45:52 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:14.730 18:45:52 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:14.730 18:45:52 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:14.730 18:45:52 -- common/autotest_common.sh@1577 -- # bdfs=() 00:03:14.730 18:45:52 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:14.730 18:45:52 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:14.730 18:45:52 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:14.730 18:45:52 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:14.730 18:45:52 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:14.730 18:45:52 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:14.730 18:45:52 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:14.730 18:45:52 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:14.730 18:45:52 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:0b:00.0 00:03:14.730 18:45:52 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:14.730 18:45:52 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device 00:03:14.730 18:45:52 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:03:14.730 18:45:52 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:14.730 18:45:52 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:03:14.730 18:45:52 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:0b:00.0 00:03:14.730 18:45:52 -- common/autotest_common.sh@1592 -- # [[ -z 0000:0b:00.0 ]] 00:03:14.730 18:45:52 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=3022734 00:03:14.730 18:45:52 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:14.730 18:45:52 -- common/autotest_common.sh@1598 -- # waitforlisten 3022734 00:03:14.730 18:45:52 -- common/autotest_common.sh@831 -- # '[' -z 3022734 ']' 00:03:14.730 18:45:52 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:14.730 18:45:52 -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:14.730 18:45:52 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:14.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:14.730 18:45:52 -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:14.730 18:45:52 -- common/autotest_common.sh@10 -- # set +x 00:03:14.730 [2024-07-24 18:45:52.222748] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
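get_nvme_bdfs_by_id above filters the discovered NVMe BDFs by PCI device ID straight from sysfs: for each BDF it reads /sys/bus/pci/devices/<bdf>/device and keeps the entry when it matches 0x0a54. A sketch of that filter, assuming the bdfs array has already been populated (here via gen_nvme.sh piped to jq, as in the trace):

# Keep only BDFs whose PCI device ID matches, as get_nvme_bdfs_by_id does.
want=0x0a54
matched=()
for bdf in "${bdfs[@]}"; do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")
    [[ $device == "$want" ]] && matched+=("$bdf")
done
printf '%s\n' "${matched[@]}"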
00:03:14.730 [2024-07-24 18:45:52.222824] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3022734 ] 00:03:14.730 EAL: No free 2048 kB hugepages reported on node 1 00:03:14.730 [2024-07-24 18:45:52.285379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:14.988 [2024-07-24 18:45:52.404652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:15.246 18:45:52 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:15.246 18:45:52 -- common/autotest_common.sh@864 -- # return 0 00:03:15.246 18:45:52 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:03:15.246 18:45:52 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:03:15.246 18:45:52 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0 00:03:18.554 nvme0n1 00:03:18.554 18:45:55 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:18.554 [2024-07-24 18:45:55.997508] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:18.554 [2024-07-24 18:45:55.997558] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:18.554 request: 00:03:18.554 { 00:03:18.554 "nvme_ctrlr_name": "nvme0", 00:03:18.554 "password": "test", 00:03:18.554 "method": "bdev_nvme_opal_revert", 00:03:18.554 "req_id": 1 00:03:18.554 } 00:03:18.554 Got JSON-RPC error response 00:03:18.554 response: 00:03:18.554 { 00:03:18.554 "code": -32603, 00:03:18.554 "message": "Internal error" 00:03:18.554 } 00:03:18.554 18:45:56 -- common/autotest_common.sh@1604 -- # true 00:03:18.554 18:45:56 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:03:18.554 18:45:56 -- common/autotest_common.sh@1608 -- # killprocess 3022734 00:03:18.554 18:45:56 -- common/autotest_common.sh@950 -- # '[' -z 3022734 ']' 00:03:18.554 18:45:56 -- common/autotest_common.sh@954 -- # kill -0 3022734 00:03:18.554 18:45:56 -- common/autotest_common.sh@955 -- # uname 00:03:18.554 18:45:56 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:18.554 18:45:56 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3022734 00:03:18.554 18:45:56 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:18.554 18:45:56 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:18.554 18:45:56 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3022734' 00:03:18.554 killing process with pid 3022734 00:03:18.554 18:45:56 -- common/autotest_common.sh@969 -- # kill 3022734 00:03:18.554 18:45:56 -- common/autotest_common.sh@974 -- # wait 3022734 00:03:20.451 18:45:57 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:20.451 18:45:57 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:20.451 18:45:57 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:20.451 18:45:57 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:20.451 18:45:57 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:20.451 18:45:57 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:20.451 18:45:57 -- common/autotest_common.sh@10 -- # set +x 00:03:20.451 18:45:57 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:20.451 18:45:57 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:20.451 18:45:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:20.451 18:45:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:20.451 18:45:57 -- common/autotest_common.sh@10 -- # set +x 00:03:20.451 ************************************ 00:03:20.451 START TEST env 00:03:20.451 ************************************ 00:03:20.451 18:45:57 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:20.451 * Looking for test storage... 00:03:20.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:20.451 18:45:57 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:20.451 18:45:57 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:20.451 18:45:57 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:20.451 18:45:57 env -- common/autotest_common.sh@10 -- # set +x 00:03:20.451 ************************************ 00:03:20.451 START TEST env_memory 00:03:20.451 ************************************ 00:03:20.451 18:45:57 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:20.451 00:03:20.451 00:03:20.451 CUnit - A unit testing framework for C - Version 2.1-3 00:03:20.451 http://cunit.sourceforge.net/ 00:03:20.451 00:03:20.451 00:03:20.451 Suite: memory 00:03:20.451 Test: alloc and free memory map ...[2024-07-24 18:45:57.961310] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:20.451 passed 00:03:20.451 Test: mem map translation ...[2024-07-24 18:45:57.981691] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:20.451 [2024-07-24 18:45:57.981712] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:20.451 [2024-07-24 18:45:57.981768] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:20.451 [2024-07-24 18:45:57.981780] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:20.451 passed 00:03:20.451 Test: mem map registration ...[2024-07-24 18:45:58.022963] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:20.451 [2024-07-24 18:45:58.022983] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:20.451 passed 00:03:20.709 Test: mem map adjacent registrations ...passed 00:03:20.709 00:03:20.709 Run Summary: Type Total Ran Passed Failed Inactive 00:03:20.709 suites 1 1 n/a 0 0 00:03:20.709 tests 4 4 4 0 0 00:03:20.709 asserts 152 152 152 0 n/a 00:03:20.709 00:03:20.709 Elapsed time = 0.142 seconds 00:03:20.709 00:03:20.709 real 0m0.151s 00:03:20.709 user 0m0.144s 00:03:20.709 sys 0m0.006s 00:03:20.709 18:45:58 
env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:20.709 18:45:58 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:20.709 ************************************ 00:03:20.709 END TEST env_memory 00:03:20.709 ************************************ 00:03:20.709 18:45:58 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:20.709 18:45:58 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:20.709 18:45:58 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:20.709 18:45:58 env -- common/autotest_common.sh@10 -- # set +x 00:03:20.710 ************************************ 00:03:20.710 START TEST env_vtophys 00:03:20.710 ************************************ 00:03:20.710 18:45:58 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:20.710 EAL: lib.eal log level changed from notice to debug 00:03:20.710 EAL: Detected lcore 0 as core 0 on socket 0 00:03:20.710 EAL: Detected lcore 1 as core 1 on socket 0 00:03:20.710 EAL: Detected lcore 2 as core 2 on socket 0 00:03:20.710 EAL: Detected lcore 3 as core 3 on socket 0 00:03:20.710 EAL: Detected lcore 4 as core 4 on socket 0 00:03:20.710 EAL: Detected lcore 5 as core 5 on socket 0 00:03:20.710 EAL: Detected lcore 6 as core 8 on socket 0 00:03:20.710 EAL: Detected lcore 7 as core 9 on socket 0 00:03:20.710 EAL: Detected lcore 8 as core 10 on socket 0 00:03:20.710 EAL: Detected lcore 9 as core 11 on socket 0 00:03:20.710 EAL: Detected lcore 10 as core 12 on socket 0 00:03:20.710 EAL: Detected lcore 11 as core 13 on socket 0 00:03:20.710 EAL: Detected lcore 12 as core 0 on socket 1 00:03:20.710 EAL: Detected lcore 13 as core 1 on socket 1 00:03:20.710 EAL: Detected lcore 14 as core 2 on socket 1 00:03:20.710 EAL: Detected lcore 15 as core 3 on socket 1 00:03:20.710 EAL: Detected lcore 16 as core 4 on socket 1 00:03:20.710 EAL: Detected lcore 17 as core 5 on socket 1 00:03:20.710 EAL: Detected lcore 18 as core 8 on socket 1 00:03:20.710 EAL: Detected lcore 19 as core 9 on socket 1 00:03:20.710 EAL: Detected lcore 20 as core 10 on socket 1 00:03:20.710 EAL: Detected lcore 21 as core 11 on socket 1 00:03:20.710 EAL: Detected lcore 22 as core 12 on socket 1 00:03:20.710 EAL: Detected lcore 23 as core 13 on socket 1 00:03:20.710 EAL: Detected lcore 24 as core 0 on socket 0 00:03:20.710 EAL: Detected lcore 25 as core 1 on socket 0 00:03:20.710 EAL: Detected lcore 26 as core 2 on socket 0 00:03:20.710 EAL: Detected lcore 27 as core 3 on socket 0 00:03:20.710 EAL: Detected lcore 28 as core 4 on socket 0 00:03:20.710 EAL: Detected lcore 29 as core 5 on socket 0 00:03:20.710 EAL: Detected lcore 30 as core 8 on socket 0 00:03:20.710 EAL: Detected lcore 31 as core 9 on socket 0 00:03:20.710 EAL: Detected lcore 32 as core 10 on socket 0 00:03:20.710 EAL: Detected lcore 33 as core 11 on socket 0 00:03:20.710 EAL: Detected lcore 34 as core 12 on socket 0 00:03:20.710 EAL: Detected lcore 35 as core 13 on socket 0 00:03:20.710 EAL: Detected lcore 36 as core 0 on socket 1 00:03:20.710 EAL: Detected lcore 37 as core 1 on socket 1 00:03:20.710 EAL: Detected lcore 38 as core 2 on socket 1 00:03:20.710 EAL: Detected lcore 39 as core 3 on socket 1 00:03:20.710 EAL: Detected lcore 40 as core 4 on socket 1 00:03:20.710 EAL: Detected lcore 41 as core 5 on socket 1 00:03:20.710 EAL: Detected lcore 42 as core 8 on socket 1 00:03:20.710 EAL: Detected lcore 43 as core 9 
on socket 1 00:03:20.710 EAL: Detected lcore 44 as core 10 on socket 1 00:03:20.710 EAL: Detected lcore 45 as core 11 on socket 1 00:03:20.710 EAL: Detected lcore 46 as core 12 on socket 1 00:03:20.710 EAL: Detected lcore 47 as core 13 on socket 1 00:03:20.710 EAL: Maximum logical cores by configuration: 128 00:03:20.710 EAL: Detected CPU lcores: 48 00:03:20.710 EAL: Detected NUMA nodes: 2 00:03:20.710 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:20.710 EAL: Detected shared linkage of DPDK 00:03:20.710 EAL: No shared files mode enabled, IPC will be disabled 00:03:20.710 EAL: Bus pci wants IOVA as 'DC' 00:03:20.710 EAL: Buses did not request a specific IOVA mode. 00:03:20.710 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:20.710 EAL: Selected IOVA mode 'VA' 00:03:20.710 EAL: No free 2048 kB hugepages reported on node 1 00:03:20.710 EAL: Probing VFIO support... 00:03:20.710 EAL: IOMMU type 1 (Type 1) is supported 00:03:20.710 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:20.710 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:20.710 EAL: VFIO support initialized 00:03:20.710 EAL: Ask a virtual area of 0x2e000 bytes 00:03:20.710 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:20.710 EAL: Setting up physically contiguous memory... 00:03:20.710 EAL: Setting maximum number of open files to 524288 00:03:20.710 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:20.710 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:20.710 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:20.710 EAL: Ask a virtual area of 0x61000 bytes 00:03:20.710 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:20.710 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:20.710 EAL: Ask a virtual area of 0x400000000 bytes 00:03:20.710 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:20.710 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:20.710 EAL: Ask a virtual area of 0x61000 bytes 00:03:20.710 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:20.710 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:20.710 EAL: Ask a virtual area of 0x400000000 bytes 00:03:20.710 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:20.710 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:20.710 EAL: Ask a virtual area of 0x61000 bytes 00:03:20.710 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:20.710 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:20.710 EAL: Ask a virtual area of 0x400000000 bytes 00:03:20.710 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:20.710 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:20.710 EAL: Ask a virtual area of 0x61000 bytes 00:03:20.710 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:20.710 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:20.710 EAL: Ask a virtual area of 0x400000000 bytes 00:03:20.710 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:20.710 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:20.710 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:20.710 EAL: Ask a virtual area of 0x61000 bytes 00:03:20.710 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:20.710 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:20.710 EAL: Ask a virtual 
area of 0x400000000 bytes 00:03:20.710 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:20.710 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:20.710 EAL: Ask a virtual area of 0x61000 bytes 00:03:20.710 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:20.710 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:20.710 EAL: Ask a virtual area of 0x400000000 bytes 00:03:20.710 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:20.710 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:20.710 EAL: Ask a virtual area of 0x61000 bytes 00:03:20.710 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:20.710 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:20.710 EAL: Ask a virtual area of 0x400000000 bytes 00:03:20.710 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:20.710 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:20.710 EAL: Ask a virtual area of 0x61000 bytes 00:03:20.710 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:20.710 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:20.710 EAL: Ask a virtual area of 0x400000000 bytes 00:03:20.710 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:20.710 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:20.710 EAL: Hugepages will be freed exactly as allocated. 00:03:20.710 EAL: No shared files mode enabled, IPC is disabled 00:03:20.710 EAL: No shared files mode enabled, IPC is disabled 00:03:20.710 EAL: TSC frequency is ~2700000 KHz 00:03:20.710 EAL: Main lcore 0 is ready (tid=7faaf28b8a00;cpuset=[0]) 00:03:20.710 EAL: Trying to obtain current memory policy. 00:03:20.710 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.710 EAL: Restoring previous memory policy: 0 00:03:20.710 EAL: request: mp_malloc_sync 00:03:20.710 EAL: No shared files mode enabled, IPC is disabled 00:03:20.710 EAL: Heap on socket 0 was expanded by 2MB 00:03:20.710 EAL: No shared files mode enabled, IPC is disabled 00:03:20.710 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:20.710 EAL: Mem event callback 'spdk:(nil)' registered 00:03:20.710 00:03:20.710 00:03:20.710 CUnit - A unit testing framework for C - Version 2.1-3 00:03:20.710 http://cunit.sourceforge.net/ 00:03:20.710 00:03:20.710 00:03:20.710 Suite: components_suite 00:03:20.710 Test: vtophys_malloc_test ...passed 00:03:20.710 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:20.710 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.710 EAL: Restoring previous memory policy: 4 00:03:20.710 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.710 EAL: request: mp_malloc_sync 00:03:20.710 EAL: No shared files mode enabled, IPC is disabled 00:03:20.710 EAL: Heap on socket 0 was expanded by 4MB 00:03:20.710 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.710 EAL: request: mp_malloc_sync 00:03:20.710 EAL: No shared files mode enabled, IPC is disabled 00:03:20.710 EAL: Heap on socket 0 was shrunk by 4MB 00:03:20.710 EAL: Trying to obtain current memory policy. 
00:03:20.710 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.710 EAL: Restoring previous memory policy: 4 00:03:20.710 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.710 EAL: request: mp_malloc_sync 00:03:20.710 EAL: No shared files mode enabled, IPC is disabled 00:03:20.710 EAL: Heap on socket 0 was expanded by 6MB 00:03:20.710 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.710 EAL: request: mp_malloc_sync 00:03:20.710 EAL: No shared files mode enabled, IPC is disabled 00:03:20.710 EAL: Heap on socket 0 was shrunk by 6MB 00:03:20.710 EAL: Trying to obtain current memory policy. 00:03:20.710 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.710 EAL: Restoring previous memory policy: 4 00:03:20.710 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.710 EAL: request: mp_malloc_sync 00:03:20.710 EAL: No shared files mode enabled, IPC is disabled 00:03:20.710 EAL: Heap on socket 0 was expanded by 10MB 00:03:20.710 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.710 EAL: request: mp_malloc_sync 00:03:20.710 EAL: No shared files mode enabled, IPC is disabled 00:03:20.710 EAL: Heap on socket 0 was shrunk by 10MB 00:03:20.710 EAL: Trying to obtain current memory policy. 00:03:20.710 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.711 EAL: Restoring previous memory policy: 4 00:03:20.711 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.711 EAL: request: mp_malloc_sync 00:03:20.711 EAL: No shared files mode enabled, IPC is disabled 00:03:20.711 EAL: Heap on socket 0 was expanded by 18MB 00:03:20.711 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.711 EAL: request: mp_malloc_sync 00:03:20.711 EAL: No shared files mode enabled, IPC is disabled 00:03:20.711 EAL: Heap on socket 0 was shrunk by 18MB 00:03:20.711 EAL: Trying to obtain current memory policy. 00:03:20.711 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.711 EAL: Restoring previous memory policy: 4 00:03:20.711 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.711 EAL: request: mp_malloc_sync 00:03:20.711 EAL: No shared files mode enabled, IPC is disabled 00:03:20.711 EAL: Heap on socket 0 was expanded by 34MB 00:03:20.711 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.711 EAL: request: mp_malloc_sync 00:03:20.711 EAL: No shared files mode enabled, IPC is disabled 00:03:20.711 EAL: Heap on socket 0 was shrunk by 34MB 00:03:20.711 EAL: Trying to obtain current memory policy. 00:03:20.711 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.711 EAL: Restoring previous memory policy: 4 00:03:20.711 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.711 EAL: request: mp_malloc_sync 00:03:20.711 EAL: No shared files mode enabled, IPC is disabled 00:03:20.711 EAL: Heap on socket 0 was expanded by 66MB 00:03:20.711 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.711 EAL: request: mp_malloc_sync 00:03:20.711 EAL: No shared files mode enabled, IPC is disabled 00:03:20.711 EAL: Heap on socket 0 was shrunk by 66MB 00:03:20.711 EAL: Trying to obtain current memory policy. 
00:03:20.711 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.711 EAL: Restoring previous memory policy: 4 00:03:20.711 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.711 EAL: request: mp_malloc_sync 00:03:20.711 EAL: No shared files mode enabled, IPC is disabled 00:03:20.711 EAL: Heap on socket 0 was expanded by 130MB 00:03:20.968 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.968 EAL: request: mp_malloc_sync 00:03:20.968 EAL: No shared files mode enabled, IPC is disabled 00:03:20.968 EAL: Heap on socket 0 was shrunk by 130MB 00:03:20.968 EAL: Trying to obtain current memory policy. 00:03:20.968 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.968 EAL: Restoring previous memory policy: 4 00:03:20.968 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.968 EAL: request: mp_malloc_sync 00:03:20.968 EAL: No shared files mode enabled, IPC is disabled 00:03:20.968 EAL: Heap on socket 0 was expanded by 258MB 00:03:20.968 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.968 EAL: request: mp_malloc_sync 00:03:20.968 EAL: No shared files mode enabled, IPC is disabled 00:03:20.968 EAL: Heap on socket 0 was shrunk by 258MB 00:03:20.968 EAL: Trying to obtain current memory policy. 00:03:20.968 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.225 EAL: Restoring previous memory policy: 4 00:03:21.225 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.225 EAL: request: mp_malloc_sync 00:03:21.225 EAL: No shared files mode enabled, IPC is disabled 00:03:21.225 EAL: Heap on socket 0 was expanded by 514MB 00:03:21.225 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.483 EAL: request: mp_malloc_sync 00:03:21.483 EAL: No shared files mode enabled, IPC is disabled 00:03:21.483 EAL: Heap on socket 0 was shrunk by 514MB 00:03:21.483 EAL: Trying to obtain current memory policy. 
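The vtophys_spdk_malloc_test iterations above and below step the heap through allocations of 4, 6, 10, 18, 34, 66, 130, 258, 514 and finally 1026 MB — that is, 2^k + 2 MB for k = 1..10 — with each step triggering the matching "expanded"/"shrunk" mem-event callbacks. The size sequence:

# The allocation sizes stepped through by this suite: 2^k + 2 MB.
for k in $(seq 1 10); do
    echo "$(( (1 << k) + 2 ))MB"
done
# -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB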
00:03:21.483 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.740 EAL: Restoring previous memory policy: 4 00:03:21.740 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.740 EAL: request: mp_malloc_sync 00:03:21.740 EAL: No shared files mode enabled, IPC is disabled 00:03:21.740 EAL: Heap on socket 0 was expanded by 1026MB 00:03:21.997 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.255 EAL: request: mp_malloc_sync 00:03:22.255 EAL: No shared files mode enabled, IPC is disabled 00:03:22.255 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:22.255 passed 00:03:22.255 00:03:22.255 Run Summary: Type Total Ran Passed Failed Inactive 00:03:22.255 suites 1 1 n/a 0 0 00:03:22.255 tests 2 2 2 0 0 00:03:22.255 asserts 497 497 497 0 n/a 00:03:22.255 00:03:22.255 Elapsed time = 1.410 seconds 00:03:22.256 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.256 EAL: request: mp_malloc_sync 00:03:22.256 EAL: No shared files mode enabled, IPC is disabled 00:03:22.256 EAL: Heap on socket 0 was shrunk by 2MB 00:03:22.256 EAL: No shared files mode enabled, IPC is disabled 00:03:22.256 EAL: No shared files mode enabled, IPC is disabled 00:03:22.256 EAL: No shared files mode enabled, IPC is disabled 00:03:22.256 00:03:22.256 real 0m1.529s 00:03:22.256 user 0m0.870s 00:03:22.256 sys 0m0.623s 00:03:22.256 18:45:59 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:22.256 18:45:59 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:22.256 ************************************ 00:03:22.256 END TEST env_vtophys 00:03:22.256 ************************************ 00:03:22.256 18:45:59 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:22.256 18:45:59 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:22.256 18:45:59 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:22.256 18:45:59 env -- common/autotest_common.sh@10 -- # set +x 00:03:22.256 ************************************ 00:03:22.256 START TEST env_pci 00:03:22.256 ************************************ 00:03:22.256 18:45:59 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:22.256 00:03:22.256 00:03:22.256 CUnit - A unit testing framework for C - Version 2.1-3 00:03:22.256 http://cunit.sourceforge.net/ 00:03:22.256 00:03:22.256 00:03:22.256 Suite: pci 00:03:22.256 Test: pci_hook ...[2024-07-24 18:45:59.710050] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3023627 has claimed it 00:03:22.256 EAL: Cannot find device (10000:00:01.0) 00:03:22.256 EAL: Failed to attach device on primary process 00:03:22.256 passed 00:03:22.256 00:03:22.256 Run Summary: Type Total Ran Passed Failed Inactive 00:03:22.256 suites 1 1 n/a 0 0 00:03:22.256 tests 1 1 1 0 0 00:03:22.256 asserts 25 25 25 0 n/a 00:03:22.256 00:03:22.256 Elapsed time = 0.022 seconds 00:03:22.256 00:03:22.256 real 0m0.035s 00:03:22.256 user 0m0.010s 00:03:22.256 sys 0m0.025s 00:03:22.256 18:45:59 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:22.256 18:45:59 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:22.256 ************************************ 00:03:22.256 END TEST env_pci 00:03:22.256 ************************************ 00:03:22.256 18:45:59 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:22.256 
18:45:59 env -- env/env.sh@15 -- # uname 00:03:22.256 18:45:59 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:22.256 18:45:59 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:22.256 18:45:59 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:22.256 18:45:59 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:03:22.256 18:45:59 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:22.256 18:45:59 env -- common/autotest_common.sh@10 -- # set +x 00:03:22.256 ************************************ 00:03:22.256 START TEST env_dpdk_post_init 00:03:22.256 ************************************ 00:03:22.256 18:45:59 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:22.256 EAL: Detected CPU lcores: 48 00:03:22.256 EAL: Detected NUMA nodes: 2 00:03:22.256 EAL: Detected shared linkage of DPDK 00:03:22.256 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:22.256 EAL: Selected IOVA mode 'VA' 00:03:22.256 EAL: No free 2048 kB hugepages reported on node 1 00:03:22.256 EAL: VFIO support initialized 00:03:22.256 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:22.516 EAL: Using IOMMU type 1 (Type 1) 00:03:22.516 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:22.516 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:22.516 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:22.516 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:22.516 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:22.516 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:22.516 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:22.516 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:23.452 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0b:00.0 (socket 0) 00:03:23.452 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:23.452 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:23.452 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:23.452 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:23.452 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:23.452 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:23.452 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:23.452 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:26.733 EAL: Releasing PCI mapped resource for 0000:0b:00.0 00:03:26.733 EAL: Calling pci_unmap_resource for 0000:0b:00.0 at 0x202001020000 00:03:26.733 Starting DPDK initialization... 00:03:26.733 Starting SPDK post initialization... 00:03:26.733 SPDK NVMe probe 00:03:26.733 Attaching to 0000:0b:00.0 00:03:26.733 Attached to 0000:0b:00.0 00:03:26.733 Cleaning up... 
00:03:26.733 00:03:26.733 real 0m4.354s 00:03:26.733 user 0m3.220s 00:03:26.733 sys 0m0.192s 00:03:26.733 18:46:04 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:26.733 18:46:04 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:26.733 ************************************ 00:03:26.733 END TEST env_dpdk_post_init 00:03:26.733 ************************************ 00:03:26.733 18:46:04 env -- env/env.sh@26 -- # uname 00:03:26.733 18:46:04 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:26.733 18:46:04 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:26.733 18:46:04 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:26.733 18:46:04 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:26.733 18:46:04 env -- common/autotest_common.sh@10 -- # set +x 00:03:26.733 ************************************ 00:03:26.733 START TEST env_mem_callbacks 00:03:26.733 ************************************ 00:03:26.734 18:46:04 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:26.734 EAL: Detected CPU lcores: 48 00:03:26.734 EAL: Detected NUMA nodes: 2 00:03:26.734 EAL: Detected shared linkage of DPDK 00:03:26.734 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:26.734 EAL: Selected IOVA mode 'VA' 00:03:26.734 EAL: No free 2048 kB hugepages reported on node 1 00:03:26.734 EAL: VFIO support initialized 00:03:26.734 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:26.734 00:03:26.734 00:03:26.734 CUnit - A unit testing framework for C - Version 2.1-3 00:03:26.734 http://cunit.sourceforge.net/ 00:03:26.734 00:03:26.734 00:03:26.734 Suite: memory 00:03:26.734 Test: test ... 
00:03:26.734 register 0x200000200000 2097152 00:03:26.734 malloc 3145728 00:03:26.734 register 0x200000400000 4194304 00:03:26.734 buf 0x200000500000 len 3145728 PASSED 00:03:26.734 malloc 64 00:03:26.734 buf 0x2000004fff40 len 64 PASSED 00:03:26.734 malloc 4194304 00:03:26.734 register 0x200000800000 6291456 00:03:26.734 buf 0x200000a00000 len 4194304 PASSED 00:03:26.734 free 0x200000500000 3145728 00:03:26.734 free 0x2000004fff40 64 00:03:26.734 unregister 0x200000400000 4194304 PASSED 00:03:26.734 free 0x200000a00000 4194304 00:03:26.734 unregister 0x200000800000 6291456 PASSED 00:03:26.734 malloc 8388608 00:03:26.734 register 0x200000400000 10485760 00:03:26.734 buf 0x200000600000 len 8388608 PASSED 00:03:26.734 free 0x200000600000 8388608 00:03:26.734 unregister 0x200000400000 10485760 PASSED 00:03:26.734 passed 00:03:26.734 00:03:26.734 Run Summary: Type Total Ran Passed Failed Inactive 00:03:26.734 suites 1 1 n/a 0 0 00:03:26.734 tests 1 1 1 0 0 00:03:26.734 asserts 15 15 15 0 n/a 00:03:26.734 00:03:26.734 Elapsed time = 0.005 seconds 00:03:26.734 00:03:26.734 real 0m0.047s 00:03:26.734 user 0m0.013s 00:03:26.734 sys 0m0.034s 00:03:26.734 18:46:04 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:26.734 18:46:04 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:26.734 ************************************ 00:03:26.734 END TEST env_mem_callbacks 00:03:26.734 ************************************ 00:03:26.734 00:03:26.734 real 0m6.408s 00:03:26.734 user 0m4.372s 00:03:26.734 sys 0m1.077s 00:03:26.734 18:46:04 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:26.734 18:46:04 env -- common/autotest_common.sh@10 -- # set +x 00:03:26.734 ************************************ 00:03:26.734 END TEST env 00:03:26.734 ************************************ 00:03:26.734 18:46:04 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:26.734 18:46:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:26.734 18:46:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:26.734 18:46:04 -- common/autotest_common.sh@10 -- # set +x 00:03:26.734 ************************************ 00:03:26.734 START TEST rpc 00:03:26.734 ************************************ 00:03:26.734 18:46:04 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:26.992 * Looking for test storage... 00:03:26.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:26.992 18:46:04 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3024298 00:03:26.992 18:46:04 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:26.992 18:46:04 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:26.992 18:46:04 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3024298 00:03:26.992 18:46:04 rpc -- common/autotest_common.sh@831 -- # '[' -z 3024298 ']' 00:03:26.992 18:46:04 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:26.992 18:46:04 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:26.992 18:46:04 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:26.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
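The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from waitforlisten, which polls until the freshly launched spdk_tgt answers on its RPC socket. A simplified sketch of that pattern — the real helper also verifies the RPC server actually responds, not merely that the socket file exists:

# Minimal waitforlisten-style poll: bail out if the target dies, succeed
# once its RPC UNIX socket appears. Simplified relative to the real helper.
waitforlisten_sketch() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock}
    for _ in $(seq 1 100); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        [[ -S $sock ]] && return 0               # socket is up: ready
        sleep 0.1
    done
    return 1
}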
00:03:26.992 18:46:04 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:26.992 18:46:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:26.992 [2024-07-24 18:46:04.402215] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:03:26.992 [2024-07-24 18:46:04.402335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3024298 ] 00:03:26.992 EAL: No free 2048 kB hugepages reported on node 1 00:03:26.992 [2024-07-24 18:46:04.462008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:26.992 [2024-07-24 18:46:04.568022] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:26.992 [2024-07-24 18:46:04.568076] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3024298' to capture a snapshot of events at runtime. 00:03:26.992 [2024-07-24 18:46:04.568112] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:26.992 [2024-07-24 18:46:04.568125] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:26.992 [2024-07-24 18:46:04.568136] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3024298 for offline analysis/debug. 00:03:26.992 [2024-07-24 18:46:04.568164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:27.251 18:46:04 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:27.251 18:46:04 rpc -- common/autotest_common.sh@864 -- # return 0 00:03:27.251 18:46:04 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:27.251 18:46:04 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:27.251 18:46:04 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:27.251 18:46:04 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:27.251 18:46:04 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:27.251 18:46:04 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:27.251 18:46:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:27.509 ************************************ 00:03:27.509 START TEST rpc_integrity 00:03:27.509 ************************************ 00:03:27.509 18:46:04 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:03:27.509 18:46:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:27.509 18:46:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:27.509 18:46:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.509 18:46:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:27.509 18:46:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:27.509 18:46:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:27.509 18:46:04 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:27.509 18:46:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:27.509 18:46:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:27.509 18:46:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.509 18:46:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:27.509 18:46:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:27.509 18:46:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:27.509 18:46:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:27.509 18:46:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.509 18:46:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:27.509 18:46:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:27.509 { 00:03:27.509 "name": "Malloc0", 00:03:27.509 "aliases": [ 00:03:27.509 "dded8431-f4a1-467b-b999-7475743e136e" 00:03:27.509 ], 00:03:27.509 "product_name": "Malloc disk", 00:03:27.509 "block_size": 512, 00:03:27.509 "num_blocks": 16384, 00:03:27.509 "uuid": "dded8431-f4a1-467b-b999-7475743e136e", 00:03:27.509 "assigned_rate_limits": { 00:03:27.509 "rw_ios_per_sec": 0, 00:03:27.509 "rw_mbytes_per_sec": 0, 00:03:27.509 "r_mbytes_per_sec": 0, 00:03:27.509 "w_mbytes_per_sec": 0 00:03:27.509 }, 00:03:27.509 "claimed": false, 00:03:27.509 "zoned": false, 00:03:27.509 "supported_io_types": { 00:03:27.509 "read": true, 00:03:27.509 "write": true, 00:03:27.509 "unmap": true, 00:03:27.509 "flush": true, 00:03:27.509 "reset": true, 00:03:27.509 "nvme_admin": false, 00:03:27.509 "nvme_io": false, 00:03:27.509 "nvme_io_md": false, 00:03:27.509 "write_zeroes": true, 00:03:27.509 "zcopy": true, 00:03:27.509 "get_zone_info": false, 00:03:27.509 "zone_management": false, 00:03:27.509 "zone_append": false, 00:03:27.509 "compare": false, 00:03:27.509 "compare_and_write": false, 00:03:27.509 "abort": true, 00:03:27.509 "seek_hole": false, 00:03:27.509 "seek_data": false, 00:03:27.509 "copy": true, 00:03:27.509 "nvme_iov_md": false 00:03:27.509 }, 00:03:27.509 "memory_domains": [ 00:03:27.510 { 00:03:27.510 "dma_device_id": "system", 00:03:27.510 "dma_device_type": 1 00:03:27.510 }, 00:03:27.510 { 00:03:27.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:27.510 "dma_device_type": 2 00:03:27.510 } 00:03:27.510 ], 00:03:27.510 "driver_specific": {} 00:03:27.510 } 00:03:27.510 ]' 00:03:27.510 18:46:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:27.510 18:46:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:27.510 18:46:04 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:27.510 18:46:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:27.510 18:46:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.510 [2024-07-24 18:46:04.981055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:27.510 [2024-07-24 18:46:04.981117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:27.510 [2024-07-24 18:46:04.981170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14bfd50 00:03:27.510 [2024-07-24 18:46:04.981185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:27.510 [2024-07-24 18:46:04.982720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
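The rpc_integrity sequence above exercises the bdev claim path over JSON-RPC: a malloc bdev is created, a passthru vbdev claims it exclusively, and bdev_get_bdevs output is length-checked at every step. A minimal by-hand sketch of the same flow, assuming a running spdk_tgt, SPDK's stock scripts/rpc.py client, and the repo root as working directory (illustrative, not part of the captured run):

./scripts/rpc.py bdev_malloc_create 8 512                      # 8 MB malloc bdev, 512 B blocks; prints its name (Malloc0)
./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0  # Passthru0 claims Malloc0 ("claimed": true, "claim_type": "exclusive_write")
./scripts/rpc.py bdev_get_bdevs | jq length                    # expect 2: the claimed base plus the passthru
./scripts/rpc.py bdev_passthru_delete Passthru0                # drop the claim before deleting the base
./scripts/rpc.py bdev_malloc_delete Malloc0
./scripts/rpc.py bdev_get_bdevs | jq length                    # back to 0, as the test's final check requires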
00:03:27.510 [2024-07-24 18:46:04.982749] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:27.510 Passthru0 00:03:27.510 18:46:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:27.510 18:46:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:27.510 18:46:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:27.510 18:46:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.510 18:46:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:27.510 18:46:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:27.510 { 00:03:27.510 "name": "Malloc0", 00:03:27.510 "aliases": [ 00:03:27.510 "dded8431-f4a1-467b-b999-7475743e136e" 00:03:27.510 ], 00:03:27.510 "product_name": "Malloc disk", 00:03:27.510 "block_size": 512, 00:03:27.510 "num_blocks": 16384, 00:03:27.510 "uuid": "dded8431-f4a1-467b-b999-7475743e136e", 00:03:27.510 "assigned_rate_limits": { 00:03:27.510 "rw_ios_per_sec": 0, 00:03:27.510 "rw_mbytes_per_sec": 0, 00:03:27.510 "r_mbytes_per_sec": 0, 00:03:27.510 "w_mbytes_per_sec": 0 00:03:27.510 }, 00:03:27.510 "claimed": true, 00:03:27.510 "claim_type": "exclusive_write", 00:03:27.510 "zoned": false, 00:03:27.510 "supported_io_types": { 00:03:27.510 "read": true, 00:03:27.510 "write": true, 00:03:27.510 "unmap": true, 00:03:27.510 "flush": true, 00:03:27.510 "reset": true, 00:03:27.510 "nvme_admin": false, 00:03:27.510 "nvme_io": false, 00:03:27.510 "nvme_io_md": false, 00:03:27.510 "write_zeroes": true, 00:03:27.510 "zcopy": true, 00:03:27.510 "get_zone_info": false, 00:03:27.510 "zone_management": false, 00:03:27.510 "zone_append": false, 00:03:27.510 "compare": false, 00:03:27.510 "compare_and_write": false, 00:03:27.510 "abort": true, 00:03:27.510 "seek_hole": false, 00:03:27.510 "seek_data": false, 00:03:27.510 "copy": true, 00:03:27.510 "nvme_iov_md": false 00:03:27.510 }, 00:03:27.510 "memory_domains": [ 00:03:27.510 { 00:03:27.510 "dma_device_id": "system", 00:03:27.510 "dma_device_type": 1 00:03:27.510 }, 00:03:27.510 { 00:03:27.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:27.510 "dma_device_type": 2 00:03:27.510 } 00:03:27.510 ], 00:03:27.510 "driver_specific": {} 00:03:27.510 }, 00:03:27.510 { 00:03:27.510 "name": "Passthru0", 00:03:27.510 "aliases": [ 00:03:27.510 "cb0e7f23-15a0-5903-912d-d53067871208" 00:03:27.510 ], 00:03:27.510 "product_name": "passthru", 00:03:27.510 "block_size": 512, 00:03:27.510 "num_blocks": 16384, 00:03:27.510 "uuid": "cb0e7f23-15a0-5903-912d-d53067871208", 00:03:27.510 "assigned_rate_limits": { 00:03:27.510 "rw_ios_per_sec": 0, 00:03:27.510 "rw_mbytes_per_sec": 0, 00:03:27.510 "r_mbytes_per_sec": 0, 00:03:27.510 "w_mbytes_per_sec": 0 00:03:27.510 }, 00:03:27.510 "claimed": false, 00:03:27.510 "zoned": false, 00:03:27.510 "supported_io_types": { 00:03:27.510 "read": true, 00:03:27.510 "write": true, 00:03:27.510 "unmap": true, 00:03:27.510 "flush": true, 00:03:27.510 "reset": true, 00:03:27.510 "nvme_admin": false, 00:03:27.510 "nvme_io": false, 00:03:27.510 "nvme_io_md": false, 00:03:27.510 "write_zeroes": true, 00:03:27.510 "zcopy": true, 00:03:27.510 "get_zone_info": false, 00:03:27.510 "zone_management": false, 00:03:27.510 "zone_append": false, 00:03:27.510 "compare": false, 00:03:27.510 "compare_and_write": false, 00:03:27.510 "abort": true, 00:03:27.510 "seek_hole": false, 00:03:27.510 "seek_data": false, 00:03:27.510 "copy": true, 00:03:27.510 "nvme_iov_md": false 00:03:27.510 
}, 00:03:27.510 "memory_domains": [ 00:03:27.510 { 00:03:27.510 "dma_device_id": "system", 00:03:27.510 "dma_device_type": 1 00:03:27.510 }, 00:03:27.510 { 00:03:27.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:27.510 "dma_device_type": 2 00:03:27.510 } 00:03:27.510 ], 00:03:27.510 "driver_specific": { 00:03:27.510 "passthru": { 00:03:27.510 "name": "Passthru0", 00:03:27.510 "base_bdev_name": "Malloc0" 00:03:27.510 } 00:03:27.510 } 00:03:27.510 } 00:03:27.510 ]' 00:03:27.510 18:46:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:27.510 18:46:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:27.510 18:46:05 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:27.510 18:46:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:27.510 18:46:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.510 18:46:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:27.510 18:46:05 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:27.510 18:46:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:27.510 18:46:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.510 18:46:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:27.510 18:46:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:27.510 18:46:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:27.510 18:46:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.510 18:46:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:27.510 18:46:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:27.510 18:46:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:27.510 18:46:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:27.510 00:03:27.510 real 0m0.229s 00:03:27.510 user 0m0.144s 00:03:27.510 sys 0m0.029s 00:03:27.510 18:46:05 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:27.510 18:46:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.510 ************************************ 00:03:27.510 END TEST rpc_integrity 00:03:27.510 ************************************ 00:03:27.768 18:46:05 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:27.768 18:46:05 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:27.768 18:46:05 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:27.768 18:46:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:27.768 ************************************ 00:03:27.768 START TEST rpc_plugins 00:03:27.768 ************************************ 00:03:27.768 18:46:05 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:03:27.768 18:46:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:27.768 18:46:05 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:27.768 18:46:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:27.768 18:46:05 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:27.768 18:46:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:27.768 18:46:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:27.768 18:46:05 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:27.768 18:46:05 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:03:27.768 18:46:05 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:27.768 18:46:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:27.768 { 00:03:27.768 "name": "Malloc1", 00:03:27.768 "aliases": [ 00:03:27.768 "b4c88775-e79b-439c-bcd2-5c1f201f4cde" 00:03:27.768 ], 00:03:27.768 "product_name": "Malloc disk", 00:03:27.768 "block_size": 4096, 00:03:27.768 "num_blocks": 256, 00:03:27.768 "uuid": "b4c88775-e79b-439c-bcd2-5c1f201f4cde", 00:03:27.768 "assigned_rate_limits": { 00:03:27.768 "rw_ios_per_sec": 0, 00:03:27.768 "rw_mbytes_per_sec": 0, 00:03:27.769 "r_mbytes_per_sec": 0, 00:03:27.769 "w_mbytes_per_sec": 0 00:03:27.769 }, 00:03:27.769 "claimed": false, 00:03:27.769 "zoned": false, 00:03:27.769 "supported_io_types": { 00:03:27.769 "read": true, 00:03:27.769 "write": true, 00:03:27.769 "unmap": true, 00:03:27.769 "flush": true, 00:03:27.769 "reset": true, 00:03:27.769 "nvme_admin": false, 00:03:27.769 "nvme_io": false, 00:03:27.769 "nvme_io_md": false, 00:03:27.769 "write_zeroes": true, 00:03:27.769 "zcopy": true, 00:03:27.769 "get_zone_info": false, 00:03:27.769 "zone_management": false, 00:03:27.769 "zone_append": false, 00:03:27.769 "compare": false, 00:03:27.769 "compare_and_write": false, 00:03:27.769 "abort": true, 00:03:27.769 "seek_hole": false, 00:03:27.769 "seek_data": false, 00:03:27.769 "copy": true, 00:03:27.769 "nvme_iov_md": false 00:03:27.769 }, 00:03:27.769 "memory_domains": [ 00:03:27.769 { 00:03:27.769 "dma_device_id": "system", 00:03:27.769 "dma_device_type": 1 00:03:27.769 }, 00:03:27.769 { 00:03:27.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:27.769 "dma_device_type": 2 00:03:27.769 } 00:03:27.769 ], 00:03:27.769 "driver_specific": {} 00:03:27.769 } 00:03:27.769 ]' 00:03:27.769 18:46:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:27.769 18:46:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:27.769 18:46:05 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:27.769 18:46:05 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:27.769 18:46:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:27.769 18:46:05 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:27.769 18:46:05 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:27.769 18:46:05 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:27.769 18:46:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:27.769 18:46:05 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:27.769 18:46:05 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:27.769 18:46:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:27.769 18:46:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:27.769 00:03:27.769 real 0m0.112s 00:03:27.769 user 0m0.073s 00:03:27.769 sys 0m0.012s 00:03:27.769 18:46:05 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:27.769 18:46:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:27.769 ************************************ 00:03:27.769 END TEST rpc_plugins 00:03:27.769 ************************************ 00:03:27.769 18:46:05 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:27.769 18:46:05 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:27.769 18:46:05 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:27.769 18:46:05 
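The rpc_plugins test above drives the same client through its plugin loader: rpc.py imports the module named by --plugin from PYTHONPATH (the test/rpc directory was appended to the path earlier for exactly this reason) and the module registers extra subcommands, here create_malloc and delete_malloc. A sketch of the equivalent manual invocation, mirroring the rpc_cmd calls above (paths assumed relative to the SPDK repo root):

export PYTHONPATH=$PYTHONPATH:./test/rpc               # directory that holds the rpc_plugin module
./scripts/rpc.py --plugin rpc_plugin create_malloc     # plugin-provided command; returns the new bdev name (Malloc1)
./scripts/rpc.py --plugin rpc_plugin delete_malloc Malloc1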
rpc -- common/autotest_common.sh@10 -- # set +x 00:03:27.769 ************************************ 00:03:27.769 START TEST rpc_trace_cmd_test 00:03:27.769 ************************************ 00:03:27.769 18:46:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:03:27.769 18:46:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:27.769 18:46:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:27.769 18:46:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:27.769 18:46:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:27.769 18:46:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:27.769 18:46:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:27.769 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3024298", 00:03:27.769 "tpoint_group_mask": "0x8", 00:03:27.769 "iscsi_conn": { 00:03:27.769 "mask": "0x2", 00:03:27.769 "tpoint_mask": "0x0" 00:03:27.769 }, 00:03:27.769 "scsi": { 00:03:27.769 "mask": "0x4", 00:03:27.769 "tpoint_mask": "0x0" 00:03:27.769 }, 00:03:27.769 "bdev": { 00:03:27.769 "mask": "0x8", 00:03:27.769 "tpoint_mask": "0xffffffffffffffff" 00:03:27.769 }, 00:03:27.769 "nvmf_rdma": { 00:03:27.769 "mask": "0x10", 00:03:27.769 "tpoint_mask": "0x0" 00:03:27.769 }, 00:03:27.769 "nvmf_tcp": { 00:03:27.769 "mask": "0x20", 00:03:27.769 "tpoint_mask": "0x0" 00:03:27.769 }, 00:03:27.769 "ftl": { 00:03:27.769 "mask": "0x40", 00:03:27.769 "tpoint_mask": "0x0" 00:03:27.769 }, 00:03:27.769 "blobfs": { 00:03:27.769 "mask": "0x80", 00:03:27.769 "tpoint_mask": "0x0" 00:03:27.769 }, 00:03:27.769 "dsa": { 00:03:27.769 "mask": "0x200", 00:03:27.769 "tpoint_mask": "0x0" 00:03:27.769 }, 00:03:27.769 "thread": { 00:03:27.769 "mask": "0x400", 00:03:27.769 "tpoint_mask": "0x0" 00:03:27.769 }, 00:03:27.769 "nvme_pcie": { 00:03:27.769 "mask": "0x800", 00:03:27.769 "tpoint_mask": "0x0" 00:03:27.769 }, 00:03:27.769 "iaa": { 00:03:27.769 "mask": "0x1000", 00:03:27.769 "tpoint_mask": "0x0" 00:03:27.769 }, 00:03:27.769 "nvme_tcp": { 00:03:27.769 "mask": "0x2000", 00:03:27.769 "tpoint_mask": "0x0" 00:03:27.769 }, 00:03:27.769 "bdev_nvme": { 00:03:27.769 "mask": "0x4000", 00:03:27.769 "tpoint_mask": "0x0" 00:03:27.769 }, 00:03:27.769 "sock": { 00:03:27.769 "mask": "0x8000", 00:03:27.769 "tpoint_mask": "0x0" 00:03:27.769 } 00:03:27.769 }' 00:03:27.769 18:46:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:27.769 18:46:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:27.769 18:46:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:28.027 18:46:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:28.027 18:46:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:28.027 18:46:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:28.027 18:46:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:28.027 18:46:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:28.027 18:46:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:28.027 18:46:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:28.027 00:03:28.027 real 0m0.197s 00:03:28.027 user 0m0.172s 00:03:28.027 sys 0m0.018s 00:03:28.028 18:46:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:28.028 18:46:05 
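The jq checks above pin down what rpc_trace_cmd_test asserts: with the bdev tracepoint group enabled at startup, trace_get_info must return more than two keys, carry a tpoint_group_mask and a tpoint_shm_path, and show an all-ones mask for bdev while every other group stays 0x0. A sketch of the same inspection by hand, with the spdk_trace snapshot taken from the app_setup_trace NOTICE lines earlier in this log (binary paths and the -e flag are assumptions):

./build/bin/spdk_tgt -e bdev &                                # enable the bdev tpoint group at startup
./scripts/rpc.py trace_get_info | jq -r .bdev.tpoint_mask     # 0xffffffffffffffff
./scripts/rpc.py trace_get_info | jq -r .tpoint_shm_path      # /dev/shm/spdk_tgt_trace.pid<pid>
./build/bin/spdk_trace -s spdk_tgt -p <pid>                   # runtime snapshot, as the NOTICE above suggests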
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:28.028 ************************************ 00:03:28.028 END TEST rpc_trace_cmd_test 00:03:28.028 ************************************ 00:03:28.028 18:46:05 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:28.028 18:46:05 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:28.028 18:46:05 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:28.028 18:46:05 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:28.028 18:46:05 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:28.028 18:46:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:28.028 ************************************ 00:03:28.028 START TEST rpc_daemon_integrity 00:03:28.028 ************************************ 00:03:28.028 18:46:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:03:28.028 18:46:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:28.028 18:46:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:28.028 18:46:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.028 18:46:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:28.028 18:46:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:28.028 18:46:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:28.028 18:46:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:28.028 18:46:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:28.028 18:46:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:28.028 18:46:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.028 18:46:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:28.028 18:46:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:28.028 18:46:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:28.028 18:46:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:28.028 18:46:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.028 18:46:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:28.028 18:46:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:28.028 { 00:03:28.028 "name": "Malloc2", 00:03:28.028 "aliases": [ 00:03:28.028 "02f9b581-f501-4c41-b52f-e664cbdc7924" 00:03:28.028 ], 00:03:28.028 "product_name": "Malloc disk", 00:03:28.028 "block_size": 512, 00:03:28.028 "num_blocks": 16384, 00:03:28.028 "uuid": "02f9b581-f501-4c41-b52f-e664cbdc7924", 00:03:28.028 "assigned_rate_limits": { 00:03:28.028 "rw_ios_per_sec": 0, 00:03:28.028 "rw_mbytes_per_sec": 0, 00:03:28.028 "r_mbytes_per_sec": 0, 00:03:28.028 "w_mbytes_per_sec": 0 00:03:28.028 }, 00:03:28.028 "claimed": false, 00:03:28.028 "zoned": false, 00:03:28.028 "supported_io_types": { 00:03:28.028 "read": true, 00:03:28.028 "write": true, 00:03:28.028 "unmap": true, 00:03:28.028 "flush": true, 00:03:28.028 "reset": true, 00:03:28.028 "nvme_admin": false, 00:03:28.028 "nvme_io": false, 00:03:28.028 "nvme_io_md": false, 00:03:28.028 "write_zeroes": true, 00:03:28.028 "zcopy": true, 00:03:28.028 "get_zone_info": false, 00:03:28.028 "zone_management": false, 00:03:28.028 "zone_append": false, 00:03:28.028 "compare": false, 00:03:28.028 "compare_and_write": false, 
00:03:28.028 "abort": true, 00:03:28.028 "seek_hole": false, 00:03:28.028 "seek_data": false, 00:03:28.028 "copy": true, 00:03:28.028 "nvme_iov_md": false 00:03:28.028 }, 00:03:28.028 "memory_domains": [ 00:03:28.028 { 00:03:28.028 "dma_device_id": "system", 00:03:28.028 "dma_device_type": 1 00:03:28.028 }, 00:03:28.028 { 00:03:28.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:28.028 "dma_device_type": 2 00:03:28.028 } 00:03:28.028 ], 00:03:28.028 "driver_specific": {} 00:03:28.028 } 00:03:28.028 ]' 00:03:28.028 18:46:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:28.286 18:46:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:28.286 18:46:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:28.286 18:46:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:28.286 18:46:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.286 [2024-07-24 18:46:05.651725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:28.286 [2024-07-24 18:46:05.651778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:28.286 [2024-07-24 18:46:05.651809] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14bf980 00:03:28.286 [2024-07-24 18:46:05.651825] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:28.286 [2024-07-24 18:46:05.653208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:28.286 [2024-07-24 18:46:05.653235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:28.286 Passthru0 00:03:28.286 18:46:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:28.286 18:46:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:28.286 18:46:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:28.286 18:46:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.286 18:46:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:28.286 18:46:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:28.286 { 00:03:28.286 "name": "Malloc2", 00:03:28.286 "aliases": [ 00:03:28.286 "02f9b581-f501-4c41-b52f-e664cbdc7924" 00:03:28.286 ], 00:03:28.286 "product_name": "Malloc disk", 00:03:28.286 "block_size": 512, 00:03:28.286 "num_blocks": 16384, 00:03:28.286 "uuid": "02f9b581-f501-4c41-b52f-e664cbdc7924", 00:03:28.286 "assigned_rate_limits": { 00:03:28.286 "rw_ios_per_sec": 0, 00:03:28.286 "rw_mbytes_per_sec": 0, 00:03:28.286 "r_mbytes_per_sec": 0, 00:03:28.286 "w_mbytes_per_sec": 0 00:03:28.286 }, 00:03:28.286 "claimed": true, 00:03:28.286 "claim_type": "exclusive_write", 00:03:28.286 "zoned": false, 00:03:28.286 "supported_io_types": { 00:03:28.286 "read": true, 00:03:28.286 "write": true, 00:03:28.286 "unmap": true, 00:03:28.286 "flush": true, 00:03:28.286 "reset": true, 00:03:28.286 "nvme_admin": false, 00:03:28.286 "nvme_io": false, 00:03:28.286 "nvme_io_md": false, 00:03:28.286 "write_zeroes": true, 00:03:28.286 "zcopy": true, 00:03:28.286 "get_zone_info": false, 00:03:28.286 "zone_management": false, 00:03:28.286 "zone_append": false, 00:03:28.286 "compare": false, 00:03:28.286 "compare_and_write": false, 00:03:28.286 "abort": true, 00:03:28.286 "seek_hole": false, 00:03:28.286 "seek_data": false, 00:03:28.286 "copy": true, 
00:03:28.286 "nvme_iov_md": false 00:03:28.286 }, 00:03:28.286 "memory_domains": [ 00:03:28.286 { 00:03:28.286 "dma_device_id": "system", 00:03:28.286 "dma_device_type": 1 00:03:28.286 }, 00:03:28.286 { 00:03:28.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:28.286 "dma_device_type": 2 00:03:28.286 } 00:03:28.286 ], 00:03:28.286 "driver_specific": {} 00:03:28.286 }, 00:03:28.286 { 00:03:28.286 "name": "Passthru0", 00:03:28.286 "aliases": [ 00:03:28.286 "bbe326ba-f916-5597-907f-428d1d75f194" 00:03:28.286 ], 00:03:28.286 "product_name": "passthru", 00:03:28.286 "block_size": 512, 00:03:28.286 "num_blocks": 16384, 00:03:28.286 "uuid": "bbe326ba-f916-5597-907f-428d1d75f194", 00:03:28.286 "assigned_rate_limits": { 00:03:28.286 "rw_ios_per_sec": 0, 00:03:28.286 "rw_mbytes_per_sec": 0, 00:03:28.286 "r_mbytes_per_sec": 0, 00:03:28.286 "w_mbytes_per_sec": 0 00:03:28.286 }, 00:03:28.286 "claimed": false, 00:03:28.286 "zoned": false, 00:03:28.286 "supported_io_types": { 00:03:28.286 "read": true, 00:03:28.286 "write": true, 00:03:28.286 "unmap": true, 00:03:28.286 "flush": true, 00:03:28.286 "reset": true, 00:03:28.286 "nvme_admin": false, 00:03:28.286 "nvme_io": false, 00:03:28.286 "nvme_io_md": false, 00:03:28.286 "write_zeroes": true, 00:03:28.286 "zcopy": true, 00:03:28.286 "get_zone_info": false, 00:03:28.286 "zone_management": false, 00:03:28.286 "zone_append": false, 00:03:28.286 "compare": false, 00:03:28.286 "compare_and_write": false, 00:03:28.286 "abort": true, 00:03:28.286 "seek_hole": false, 00:03:28.286 "seek_data": false, 00:03:28.286 "copy": true, 00:03:28.286 "nvme_iov_md": false 00:03:28.286 }, 00:03:28.286 "memory_domains": [ 00:03:28.286 { 00:03:28.286 "dma_device_id": "system", 00:03:28.286 "dma_device_type": 1 00:03:28.286 }, 00:03:28.286 { 00:03:28.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:28.286 "dma_device_type": 2 00:03:28.286 } 00:03:28.286 ], 00:03:28.286 "driver_specific": { 00:03:28.286 "passthru": { 00:03:28.286 "name": "Passthru0", 00:03:28.287 "base_bdev_name": "Malloc2" 00:03:28.287 } 00:03:28.287 } 00:03:28.287 } 00:03:28.287 ]' 00:03:28.287 18:46:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:28.287 18:46:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:28.287 18:46:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:28.287 18:46:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:28.287 18:46:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.287 18:46:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:28.287 18:46:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:28.287 18:46:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:28.287 18:46:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.287 18:46:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:28.287 18:46:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:28.287 18:46:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:28.287 18:46:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.287 18:46:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:28.287 18:46:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:28.287 18:46:05 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:28.287 18:46:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:28.287 00:03:28.287 real 0m0.232s 00:03:28.287 user 0m0.154s 00:03:28.287 sys 0m0.022s 00:03:28.287 18:46:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:28.287 18:46:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.287 ************************************ 00:03:28.287 END TEST rpc_daemon_integrity 00:03:28.287 ************************************ 00:03:28.287 18:46:05 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:28.287 18:46:05 rpc -- rpc/rpc.sh@84 -- # killprocess 3024298 00:03:28.287 18:46:05 rpc -- common/autotest_common.sh@950 -- # '[' -z 3024298 ']' 00:03:28.287 18:46:05 rpc -- common/autotest_common.sh@954 -- # kill -0 3024298 00:03:28.287 18:46:05 rpc -- common/autotest_common.sh@955 -- # uname 00:03:28.287 18:46:05 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:28.287 18:46:05 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3024298 00:03:28.287 18:46:05 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:28.287 18:46:05 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:28.287 18:46:05 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3024298' 00:03:28.287 killing process with pid 3024298 00:03:28.287 18:46:05 rpc -- common/autotest_common.sh@969 -- # kill 3024298 00:03:28.287 18:46:05 rpc -- common/autotest_common.sh@974 -- # wait 3024298 00:03:28.854 00:03:28.854 real 0m2.004s 00:03:28.854 user 0m2.438s 00:03:28.854 sys 0m0.640s 00:03:28.854 18:46:06 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:28.854 18:46:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:28.854 ************************************ 00:03:28.854 END TEST rpc 00:03:28.854 ************************************ 00:03:28.854 18:46:06 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:28.854 18:46:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:28.854 18:46:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:28.854 18:46:06 -- common/autotest_common.sh@10 -- # set +x 00:03:28.854 ************************************ 00:03:28.854 START TEST skip_rpc 00:03:28.854 ************************************ 00:03:28.854 18:46:06 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:28.854 * Looking for test storage... 
00:03:28.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:28.854 18:46:06 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:28.854 18:46:06 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:28.854 18:46:06 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:28.854 18:46:06 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:28.854 18:46:06 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:28.854 18:46:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:28.854 ************************************ 00:03:28.854 START TEST skip_rpc 00:03:28.854 ************************************ 00:03:28.854 18:46:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:03:28.854 18:46:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3024722 00:03:28.854 18:46:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:28.854 18:46:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:28.854 18:46:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:29.112 [2024-07-24 18:46:06.480692] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:03:29.112 [2024-07-24 18:46:06.480768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3024722 ] 00:03:29.112 EAL: No free 2048 kB hugepages reported on node 1 00:03:29.112 [2024-07-24 18:46:06.544476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:29.112 [2024-07-24 18:46:06.656159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:34.375 18:46:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:34.375 18:46:11 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:03:34.375 18:46:11 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:34.375 18:46:11 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:03:34.375 18:46:11 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:34.375 18:46:11 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:03:34.375 18:46:11 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:34.375 18:46:11 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:03:34.375 18:46:11 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.375 18:46:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.375 18:46:11 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:34.375 18:46:11 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:03:34.375 18:46:11 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:34.375 18:46:11 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:34.375 18:46:11 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:34.375 18:46:11 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:34.375 18:46:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3024722 00:03:34.375 18:46:11 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 3024722 ']' 00:03:34.375 18:46:11 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 3024722 00:03:34.375 18:46:11 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:03:34.375 18:46:11 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:34.375 18:46:11 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3024722 00:03:34.375 18:46:11 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:34.375 18:46:11 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:34.375 18:46:11 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3024722' 00:03:34.375 killing process with pid 3024722 00:03:34.375 18:46:11 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 3024722 00:03:34.375 18:46:11 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 3024722 00:03:34.375 00:03:34.375 real 0m5.502s 00:03:34.375 user 0m5.181s 00:03:34.375 sys 0m0.317s 00:03:34.375 18:46:11 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:34.375 18:46:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.375 ************************************ 00:03:34.375 END TEST skip_rpc 00:03:34.375 ************************************ 00:03:34.375 18:46:11 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:34.375 18:46:11 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:34.375 18:46:11 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:34.375 18:46:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.634 ************************************ 00:03:34.634 START TEST skip_rpc_with_json 00:03:34.634 ************************************ 00:03:34.634 18:46:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:03:34.634 18:46:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:34.634 18:46:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3025408 00:03:34.634 18:46:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:34.634 18:46:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:34.634 18:46:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3025408 00:03:34.634 18:46:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 3025408 ']' 00:03:34.634 18:46:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:34.634 18:46:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:34.634 18:46:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:34.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
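The skip_rpc case that just finished asserts the inverse of everything before it: with --no-rpc-server the target never creates the Unix-domain RPC socket, so the spdk_get_version call above had to fail (es=1) for the test to pass. A condensed sketch of that assertion (illustrative, not part of the captured run):

./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
pid=$!
sleep 5                                   # the test sleeps instead of polling: there is no socket to wait on
! ./scripts/rpc.py spdk_get_version       # must fail; a successful reply would fail the test
kill $pid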
00:03:34.634 18:46:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:34.634 18:46:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:34.634 [2024-07-24 18:46:12.034111] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:03:34.634 [2024-07-24 18:46:12.034227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3025408 ] 00:03:34.634 EAL: No free 2048 kB hugepages reported on node 1 00:03:34.634 [2024-07-24 18:46:12.096062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:34.634 [2024-07-24 18:46:12.212658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:35.569 18:46:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:35.569 18:46:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:03:35.569 18:46:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:35.569 18:46:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:35.569 18:46:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:35.569 [2024-07-24 18:46:12.966131] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:35.569 request: 00:03:35.569 { 00:03:35.569 "trtype": "tcp", 00:03:35.569 "method": "nvmf_get_transports", 00:03:35.569 "req_id": 1 00:03:35.569 } 00:03:35.569 Got JSON-RPC error response 00:03:35.569 response: 00:03:35.569 { 00:03:35.569 "code": -19, 00:03:35.569 "message": "No such device" 00:03:35.569 } 00:03:35.569 18:46:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:35.569 18:46:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:35.569 18:46:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:35.569 18:46:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:35.569 [2024-07-24 18:46:12.974256] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:35.569 18:46:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:35.569 18:46:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:35.569 18:46:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:35.569 18:46:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:35.569 18:46:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:35.569 18:46:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:35.569 { 00:03:35.569 "subsystems": [ 00:03:35.569 { 00:03:35.569 "subsystem": "vfio_user_target", 00:03:35.569 "config": null 00:03:35.569 }, 00:03:35.569 { 00:03:35.569 "subsystem": "keyring", 00:03:35.569 "config": [] 00:03:35.569 }, 00:03:35.569 { 00:03:35.569 "subsystem": "iobuf", 00:03:35.569 "config": [ 00:03:35.569 { 00:03:35.569 "method": "iobuf_set_options", 00:03:35.569 "params": { 00:03:35.569 "small_pool_count": 8192, 00:03:35.569 "large_pool_count": 1024, 00:03:35.569 "small_bufsize": 8192, 00:03:35.569 "large_bufsize": 
135168 00:03:35.569 } 00:03:35.569 } 00:03:35.569 ] 00:03:35.569 }, 00:03:35.569 { 00:03:35.569 "subsystem": "sock", 00:03:35.569 "config": [ 00:03:35.569 { 00:03:35.569 "method": "sock_set_default_impl", 00:03:35.569 "params": { 00:03:35.569 "impl_name": "posix" 00:03:35.569 } 00:03:35.569 }, 00:03:35.569 { 00:03:35.569 "method": "sock_impl_set_options", 00:03:35.569 "params": { 00:03:35.569 "impl_name": "ssl", 00:03:35.569 "recv_buf_size": 4096, 00:03:35.569 "send_buf_size": 4096, 00:03:35.569 "enable_recv_pipe": true, 00:03:35.569 "enable_quickack": false, 00:03:35.569 "enable_placement_id": 0, 00:03:35.569 "enable_zerocopy_send_server": true, 00:03:35.569 "enable_zerocopy_send_client": false, 00:03:35.569 "zerocopy_threshold": 0, 00:03:35.569 "tls_version": 0, 00:03:35.569 "enable_ktls": false 00:03:35.569 } 00:03:35.569 }, 00:03:35.569 { 00:03:35.569 "method": "sock_impl_set_options", 00:03:35.569 "params": { 00:03:35.569 "impl_name": "posix", 00:03:35.569 "recv_buf_size": 2097152, 00:03:35.569 "send_buf_size": 2097152, 00:03:35.569 "enable_recv_pipe": true, 00:03:35.569 "enable_quickack": false, 00:03:35.569 "enable_placement_id": 0, 00:03:35.569 "enable_zerocopy_send_server": true, 00:03:35.569 "enable_zerocopy_send_client": false, 00:03:35.569 "zerocopy_threshold": 0, 00:03:35.569 "tls_version": 0, 00:03:35.569 "enable_ktls": false 00:03:35.569 } 00:03:35.569 } 00:03:35.569 ] 00:03:35.569 }, 00:03:35.569 { 00:03:35.569 "subsystem": "vmd", 00:03:35.569 "config": [] 00:03:35.569 }, 00:03:35.569 { 00:03:35.569 "subsystem": "accel", 00:03:35.569 "config": [ 00:03:35.569 { 00:03:35.569 "method": "accel_set_options", 00:03:35.569 "params": { 00:03:35.569 "small_cache_size": 128, 00:03:35.569 "large_cache_size": 16, 00:03:35.569 "task_count": 2048, 00:03:35.569 "sequence_count": 2048, 00:03:35.569 "buf_count": 2048 00:03:35.569 } 00:03:35.569 } 00:03:35.569 ] 00:03:35.569 }, 00:03:35.569 { 00:03:35.569 "subsystem": "bdev", 00:03:35.569 "config": [ 00:03:35.569 { 00:03:35.569 "method": "bdev_set_options", 00:03:35.569 "params": { 00:03:35.569 "bdev_io_pool_size": 65535, 00:03:35.569 "bdev_io_cache_size": 256, 00:03:35.569 "bdev_auto_examine": true, 00:03:35.569 "iobuf_small_cache_size": 128, 00:03:35.569 "iobuf_large_cache_size": 16 00:03:35.569 } 00:03:35.569 }, 00:03:35.569 { 00:03:35.569 "method": "bdev_raid_set_options", 00:03:35.569 "params": { 00:03:35.569 "process_window_size_kb": 1024, 00:03:35.569 "process_max_bandwidth_mb_sec": 0 00:03:35.569 } 00:03:35.569 }, 00:03:35.569 { 00:03:35.569 "method": "bdev_iscsi_set_options", 00:03:35.569 "params": { 00:03:35.569 "timeout_sec": 30 00:03:35.569 } 00:03:35.569 }, 00:03:35.569 { 00:03:35.569 "method": "bdev_nvme_set_options", 00:03:35.569 "params": { 00:03:35.569 "action_on_timeout": "none", 00:03:35.569 "timeout_us": 0, 00:03:35.569 "timeout_admin_us": 0, 00:03:35.569 "keep_alive_timeout_ms": 10000, 00:03:35.569 "arbitration_burst": 0, 00:03:35.569 "low_priority_weight": 0, 00:03:35.569 "medium_priority_weight": 0, 00:03:35.569 "high_priority_weight": 0, 00:03:35.569 "nvme_adminq_poll_period_us": 10000, 00:03:35.569 "nvme_ioq_poll_period_us": 0, 00:03:35.569 "io_queue_requests": 0, 00:03:35.569 "delay_cmd_submit": true, 00:03:35.569 "transport_retry_count": 4, 00:03:35.569 "bdev_retry_count": 3, 00:03:35.569 "transport_ack_timeout": 0, 00:03:35.569 "ctrlr_loss_timeout_sec": 0, 00:03:35.569 "reconnect_delay_sec": 0, 00:03:35.569 "fast_io_fail_timeout_sec": 0, 00:03:35.569 "disable_auto_failback": false, 00:03:35.569 "generate_uuids": 
false, 00:03:35.569 "transport_tos": 0, 00:03:35.569 "nvme_error_stat": false, 00:03:35.569 "rdma_srq_size": 0, 00:03:35.569 "io_path_stat": false, 00:03:35.569 "allow_accel_sequence": false, 00:03:35.569 "rdma_max_cq_size": 0, 00:03:35.569 "rdma_cm_event_timeout_ms": 0, 00:03:35.569 "dhchap_digests": [ 00:03:35.569 "sha256", 00:03:35.569 "sha384", 00:03:35.569 "sha512" 00:03:35.569 ], 00:03:35.569 "dhchap_dhgroups": [ 00:03:35.569 "null", 00:03:35.569 "ffdhe2048", 00:03:35.569 "ffdhe3072", 00:03:35.569 "ffdhe4096", 00:03:35.569 "ffdhe6144", 00:03:35.569 "ffdhe8192" 00:03:35.569 ] 00:03:35.569 } 00:03:35.569 }, 00:03:35.569 { 00:03:35.569 "method": "bdev_nvme_set_hotplug", 00:03:35.569 "params": { 00:03:35.569 "period_us": 100000, 00:03:35.569 "enable": false 00:03:35.569 } 00:03:35.569 }, 00:03:35.569 { 00:03:35.569 "method": "bdev_wait_for_examine" 00:03:35.569 } 00:03:35.569 ] 00:03:35.569 }, 00:03:35.569 { 00:03:35.569 "subsystem": "scsi", 00:03:35.569 "config": null 00:03:35.569 }, 00:03:35.569 { 00:03:35.569 "subsystem": "scheduler", 00:03:35.569 "config": [ 00:03:35.569 { 00:03:35.569 "method": "framework_set_scheduler", 00:03:35.569 "params": { 00:03:35.569 "name": "static" 00:03:35.569 } 00:03:35.569 } 00:03:35.569 ] 00:03:35.569 }, 00:03:35.569 { 00:03:35.569 "subsystem": "vhost_scsi", 00:03:35.569 "config": [] 00:03:35.569 }, 00:03:35.569 { 00:03:35.569 "subsystem": "vhost_blk", 00:03:35.569 "config": [] 00:03:35.569 }, 00:03:35.569 { 00:03:35.569 "subsystem": "ublk", 00:03:35.569 "config": [] 00:03:35.569 }, 00:03:35.569 { 00:03:35.569 "subsystem": "nbd", 00:03:35.569 "config": [] 00:03:35.569 }, 00:03:35.569 { 00:03:35.569 "subsystem": "nvmf", 00:03:35.569 "config": [ 00:03:35.569 { 00:03:35.569 "method": "nvmf_set_config", 00:03:35.569 "params": { 00:03:35.569 "discovery_filter": "match_any", 00:03:35.569 "admin_cmd_passthru": { 00:03:35.569 "identify_ctrlr": false 00:03:35.569 } 00:03:35.570 } 00:03:35.570 }, 00:03:35.570 { 00:03:35.570 "method": "nvmf_set_max_subsystems", 00:03:35.570 "params": { 00:03:35.570 "max_subsystems": 1024 00:03:35.570 } 00:03:35.570 }, 00:03:35.570 { 00:03:35.570 "method": "nvmf_set_crdt", 00:03:35.570 "params": { 00:03:35.570 "crdt1": 0, 00:03:35.570 "crdt2": 0, 00:03:35.570 "crdt3": 0 00:03:35.570 } 00:03:35.570 }, 00:03:35.570 { 00:03:35.570 "method": "nvmf_create_transport", 00:03:35.570 "params": { 00:03:35.570 "trtype": "TCP", 00:03:35.570 "max_queue_depth": 128, 00:03:35.570 "max_io_qpairs_per_ctrlr": 127, 00:03:35.570 "in_capsule_data_size": 4096, 00:03:35.570 "max_io_size": 131072, 00:03:35.570 "io_unit_size": 131072, 00:03:35.570 "max_aq_depth": 128, 00:03:35.570 "num_shared_buffers": 511, 00:03:35.570 "buf_cache_size": 4294967295, 00:03:35.570 "dif_insert_or_strip": false, 00:03:35.570 "zcopy": false, 00:03:35.570 "c2h_success": true, 00:03:35.570 "sock_priority": 0, 00:03:35.570 "abort_timeout_sec": 1, 00:03:35.570 "ack_timeout": 0, 00:03:35.570 "data_wr_pool_size": 0 00:03:35.570 } 00:03:35.570 } 00:03:35.570 ] 00:03:35.570 }, 00:03:35.570 { 00:03:35.570 "subsystem": "iscsi", 00:03:35.570 "config": [ 00:03:35.570 { 00:03:35.570 "method": "iscsi_set_options", 00:03:35.570 "params": { 00:03:35.570 "node_base": "iqn.2016-06.io.spdk", 00:03:35.570 "max_sessions": 128, 00:03:35.570 "max_connections_per_session": 2, 00:03:35.570 "max_queue_depth": 64, 00:03:35.570 "default_time2wait": 2, 00:03:35.570 "default_time2retain": 20, 00:03:35.570 "first_burst_length": 8192, 00:03:35.570 "immediate_data": true, 00:03:35.570 "allow_duplicated_isid": 
false, 00:03:35.570 "error_recovery_level": 0, 00:03:35.570 "nop_timeout": 60, 00:03:35.570 "nop_in_interval": 30, 00:03:35.570 "disable_chap": false, 00:03:35.570 "require_chap": false, 00:03:35.570 "mutual_chap": false, 00:03:35.570 "chap_group": 0, 00:03:35.570 "max_large_datain_per_connection": 64, 00:03:35.570 "max_r2t_per_connection": 4, 00:03:35.570 "pdu_pool_size": 36864, 00:03:35.570 "immediate_data_pool_size": 16384, 00:03:35.570 "data_out_pool_size": 2048 00:03:35.570 } 00:03:35.570 } 00:03:35.570 ] 00:03:35.570 } 00:03:35.570 ] 00:03:35.570 } 00:03:35.570 18:46:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:35.570 18:46:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3025408 00:03:35.570 18:46:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3025408 ']' 00:03:35.570 18:46:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3025408 00:03:35.570 18:46:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:03:35.570 18:46:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:35.570 18:46:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3025408 00:03:35.570 18:46:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:35.570 18:46:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:35.570 18:46:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3025408' 00:03:35.570 killing process with pid 3025408 00:03:35.570 18:46:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3025408 00:03:35.570 18:46:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3025408 00:03:36.135 18:46:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3025678 00:03:36.135 18:46:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:36.135 18:46:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:41.436 18:46:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3025678 00:03:41.436 18:46:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3025678 ']' 00:03:41.436 18:46:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3025678 00:03:41.436 18:46:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:03:41.436 18:46:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:41.436 18:46:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3025678 00:03:41.436 18:46:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:41.436 18:46:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:41.436 18:46:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3025678' 00:03:41.436 killing process with pid 3025678 00:03:41.436 18:46:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3025678 00:03:41.436 18:46:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 
3025678 00:03:41.694 18:46:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:41.695 18:46:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:41.695 00:03:41.695 real 0m7.145s 00:03:41.695 user 0m6.904s 00:03:41.695 sys 0m0.744s 00:03:41.695 18:46:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:41.695 18:46:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:41.695 ************************************ 00:03:41.695 END TEST skip_rpc_with_json 00:03:41.695 ************************************ 00:03:41.695 18:46:19 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:41.695 18:46:19 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:41.695 18:46:19 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:41.695 18:46:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.695 ************************************ 00:03:41.695 START TEST skip_rpc_with_delay 00:03:41.695 ************************************ 00:03:41.695 18:46:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:03:41.695 18:46:19 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:41.695 18:46:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:03:41.695 18:46:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:41.695 18:46:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:41.695 18:46:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:41.695 18:46:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:41.695 18:46:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:41.695 18:46:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:41.695 18:46:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:41.695 18:46:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:41.695 18:46:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:41.695 18:46:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:41.695 [2024-07-24 18:46:19.225984] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
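skip_rpc_with_json, which ended just above, validates a save/restore cycle around the JSON dump printed earlier: nvmf_get_transports fails on a fresh target, a TCP transport is created, save_config serializes the whole subsystem tree into config.json, and a second target replayed from that file must log 'TCP Transport Init' again. A sketch of the cycle, assuming the same paths the test uses:

./scripts/rpc.py nvmf_get_transports --trtype tcp     # "No such device" on a fresh target, as above
./scripts/rpc.py nvmf_create_transport -t tcp
./scripts/rpc.py save_config > test/rpc/config.json   # the JSON shown above, written to disk
./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
sleep 5
grep -q 'TCP Transport Init' test/rpc/log.txt         # proves the transport was recreated from config alone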
00:03:41.695 [2024-07-24 18:46:19.226116] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:03:41.695 18:46:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:03:41.695 18:46:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:41.695 18:46:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:41.695 18:46:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:41.695 00:03:41.695 real 0m0.069s 00:03:41.695 user 0m0.045s 00:03:41.695 sys 0m0.023s 00:03:41.695 18:46:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:41.695 18:46:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:41.695 ************************************ 00:03:41.695 END TEST skip_rpc_with_delay 00:03:41.695 ************************************ 00:03:41.695 18:46:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:41.695 18:46:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:41.695 18:46:19 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:41.695 18:46:19 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:41.695 18:46:19 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:41.695 18:46:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.695 ************************************ 00:03:41.695 START TEST exit_on_failed_rpc_init 00:03:41.695 ************************************ 00:03:41.695 18:46:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:03:41.695 18:46:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3026398 00:03:41.695 18:46:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:41.695 18:46:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3026398 00:03:41.695 18:46:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 3026398 ']' 00:03:41.695 18:46:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:41.695 18:46:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:41.695 18:46:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:41.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:41.695 18:46:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:41.695 18:46:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:41.953 [2024-07-24 18:46:19.334819] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
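skip_rpc_with_delay reduces to the single argument-validation check visible in the ERROR above: --wait-for-rpc defers subsystem init until an RPC arrives, which is meaningless when --no-rpc-server guarantees none ever can, so spdk_app_start rejects the combination and the target must exit non-zero. Sketched:

./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
echo $?    # non-zero by design; a clean start here would fail the test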
00:03:41.953 [2024-07-24 18:46:19.334898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3026398 ] 00:03:41.953 EAL: No free 2048 kB hugepages reported on node 1 00:03:41.953 [2024-07-24 18:46:19.391257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:41.953 [2024-07-24 18:46:19.502119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:42.210 18:46:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:42.211 18:46:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:03:42.211 18:46:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:42.211 18:46:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:42.211 18:46:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:03:42.211 18:46:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:42.211 18:46:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:42.211 18:46:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:42.211 18:46:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:42.211 18:46:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:42.211 18:46:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:42.211 18:46:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:42.211 18:46:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:42.211 18:46:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:42.211 18:46:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:42.468 [2024-07-24 18:46:19.818954] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
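exit_on_failed_rpc_init now starts a second spdk_tgt while the first (pid 3026398) still holds the default RPC socket, and asserts that the newcomer fails to initialize. Condensed, the collision being exercised (same binary and core masks as traced):

    spdk_tgt -m 0x1 &    # first instance binds /var/tmp/spdk.sock
    spdk_tgt -m 0x2      # second instance: socket in use, exits non-zero

The *ERROR* and spdk_app_stop lines that follow show the second instance failing exactly this way, which the harness maps to es=234 and accepts.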
00:03:42.468 [2024-07-24 18:46:19.819026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3026409 ] 00:03:42.468 EAL: No free 2048 kB hugepages reported on node 1 00:03:42.469 [2024-07-24 18:46:19.879701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:42.469 [2024-07-24 18:46:20.000734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:03:42.469 [2024-07-24 18:46:20.000874] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:03:42.469 [2024-07-24 18:46:20.000897] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:42.469 [2024-07-24 18:46:20.000911] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:42.727 18:46:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:03:42.727 18:46:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:42.727 18:46:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:03:42.727 18:46:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:03:42.727 18:46:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:03:42.727 18:46:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:42.727 18:46:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:42.727 18:46:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3026398 00:03:42.727 18:46:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 3026398 ']' 00:03:42.727 18:46:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 3026398 00:03:42.727 18:46:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:03:42.727 18:46:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:42.727 18:46:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3026398 00:03:42.727 18:46:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:42.727 18:46:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:42.727 18:46:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3026398' 00:03:42.727 killing process with pid 3026398 00:03:42.727 18:46:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 3026398 00:03:42.727 18:46:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 3026398 00:03:43.293 00:03:43.293 real 0m1.342s 00:03:43.293 user 0m1.529s 00:03:43.293 sys 0m0.436s 00:03:43.293 18:46:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:43.293 18:46:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:43.293 ************************************ 00:03:43.293 END TEST exit_on_failed_rpc_init 00:03:43.293 ************************************ 00:03:43.293 18:46:20 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:43.293 00:03:43.293 real 0m14.299s 00:03:43.293 user 0m13.756s 00:03:43.293 sys 0m1.681s 00:03:43.293 18:46:20 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:43.293 18:46:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:43.293 ************************************ 00:03:43.293 END TEST skip_rpc 00:03:43.293 ************************************ 00:03:43.293 18:46:20 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:43.293 18:46:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:43.293 18:46:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:43.293 18:46:20 -- common/autotest_common.sh@10 -- # set +x 00:03:43.293 ************************************ 00:03:43.293 START TEST rpc_client 00:03:43.293 ************************************ 00:03:43.293 18:46:20 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:43.293 * Looking for test storage... 00:03:43.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:43.293 18:46:20 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:43.293 OK 00:03:43.293 18:46:20 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:43.293 00:03:43.293 real 0m0.057s 00:03:43.293 user 0m0.024s 00:03:43.293 sys 0m0.037s 00:03:43.293 18:46:20 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:43.293 18:46:20 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:43.293 ************************************ 00:03:43.293 END TEST rpc_client 00:03:43.293 ************************************ 00:03:43.293 18:46:20 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:43.293 18:46:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:43.293 18:46:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:43.293 18:46:20 -- common/autotest_common.sh@10 -- # set +x 00:03:43.293 ************************************ 00:03:43.293 START TEST json_config 00:03:43.293 ************************************ 00:03:43.293 18:46:20 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:43.293 18:46:20 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:43.293 18:46:20 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:43.293 18:46:20 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:43.293 18:46:20 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:43.293 18:46:20 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:43.293 18:46:20 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:43.293 18:46:20 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:43.293 18:46:20 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:43.293 18:46:20 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:43.293 18:46:20 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:43.293 18:46:20 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:03:43.293 18:46:20 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:43.293 18:46:20 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:03:43.293 18:46:20 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:03:43.293 18:46:20 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:43.293 18:46:20 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:43.293 18:46:20 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:43.293 18:46:20 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:43.293 18:46:20 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:43.293 18:46:20 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:43.293 18:46:20 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:43.293 18:46:20 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:43.293 18:46:20 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.293 18:46:20 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.293 18:46:20 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.293 18:46:20 json_config -- paths/export.sh@5 -- # export PATH 00:03:43.293 18:46:20 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.293 18:46:20 json_config -- nvmf/common.sh@47 -- # : 0 00:03:43.293 18:46:20 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:43.293 18:46:20 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:43.293 18:46:20 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:43.293 18:46:20 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:43.293 18:46:20 json_config -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:03:43.293 18:46:20 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:43.293 18:46:20 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:43.293 18:46:20 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:43.293 18:46:20 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:43.293 18:46:20 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:43.293 18:46:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:43.293 18:46:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:43.293 18:46:20 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:43.293 18:46:20 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:43.293 18:46:20 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:43.293 18:46:20 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:43.293 18:46:20 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:43.294 18:46:20 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:43.294 18:46:20 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:43.294 18:46:20 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:43.294 18:46:20 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:43.294 18:46:20 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:43.294 18:46:20 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:43.294 18:46:20 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:03:43.294 INFO: JSON configuration test init 00:03:43.294 18:46:20 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:03:43.294 18:46:20 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:03:43.294 18:46:20 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:43.294 18:46:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:43.294 18:46:20 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:03:43.294 18:46:20 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:43.294 18:46:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:43.294 18:46:20 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:03:43.294 18:46:20 json_config -- json_config/common.sh@9 -- # local app=target 00:03:43.294 18:46:20 json_config -- json_config/common.sh@10 -- # shift 00:03:43.294 18:46:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:43.294 18:46:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:43.294 18:46:20 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:43.294 18:46:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 
]] 00:03:43.294 18:46:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:43.294 18:46:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3026664 00:03:43.294 18:46:20 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:43.294 18:46:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:43.294 Waiting for target to run... 00:03:43.294 18:46:20 json_config -- json_config/common.sh@25 -- # waitforlisten 3026664 /var/tmp/spdk_tgt.sock 00:03:43.294 18:46:20 json_config -- common/autotest_common.sh@831 -- # '[' -z 3026664 ']' 00:03:43.294 18:46:20 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:43.294 18:46:20 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:43.294 18:46:20 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:43.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:43.294 18:46:20 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:43.294 18:46:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:43.552 [2024-07-24 18:46:20.905619] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:03:43.552 [2024-07-24 18:46:20.905724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3026664 ] 00:03:43.552 EAL: No free 2048 kB hugepages reported on node 1 00:03:44.117 [2024-07-24 18:46:21.439815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:44.117 [2024-07-24 18:46:21.547461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:44.374 18:46:21 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:44.374 18:46:21 json_config -- common/autotest_common.sh@864 -- # return 0 00:03:44.374 18:46:21 json_config -- json_config/common.sh@26 -- # echo '' 00:03:44.374 00:03:44.374 18:46:21 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:03:44.374 18:46:21 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:03:44.374 18:46:21 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:44.374 18:46:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:44.374 18:46:21 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:03:44.374 18:46:21 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:03:44.374 18:46:21 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:44.374 18:46:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:44.374 18:46:21 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:44.374 18:46:21 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:03:44.374 18:46:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:47.656 18:46:25 json_config -- json_config/json_config.sh@280 -- # 
tgt_check_notification_types 00:03:47.656 18:46:25 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:47.656 18:46:25 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:47.656 18:46:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.656 18:46:25 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:47.656 18:46:25 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:47.656 18:46:25 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:47.656 18:46:25 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:03:47.656 18:46:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:47.656 18:46:25 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:03:47.914 18:46:25 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:03:47.914 18:46:25 json_config -- json_config/json_config.sh@48 -- # local get_types 00:03:47.914 18:46:25 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:03:47.914 18:46:25 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:03:47.914 18:46:25 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:03:47.914 18:46:25 json_config -- json_config/json_config.sh@51 -- # sort 00:03:47.914 18:46:25 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:03:47.914 18:46:25 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:03:47.914 18:46:25 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:03:47.914 18:46:25 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:03:47.914 18:46:25 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:47.914 18:46:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.914 18:46:25 json_config -- json_config/json_config.sh@59 -- # return 0 00:03:47.914 18:46:25 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:03:47.914 18:46:25 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:03:47.914 18:46:25 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:03:47.914 18:46:25 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:03:47.914 18:46:25 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:03:47.914 18:46:25 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:03:47.914 18:46:25 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:47.914 18:46:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.914 18:46:25 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:47.914 18:46:25 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:03:47.914 18:46:25 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:03:47.914 18:46:25 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:47.914 18:46:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:48.172 MallocForNvmf0 00:03:48.172 
18:46:25 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:48.172 18:46:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:48.430 MallocForNvmf1 00:03:48.430 18:46:25 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:48.430 18:46:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:48.430 [2024-07-24 18:46:26.022368] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:48.688 18:46:26 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:48.688 18:46:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:48.688 18:46:26 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:48.688 18:46:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:48.945 18:46:26 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:48.945 18:46:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:49.202 18:46:26 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:49.203 18:46:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:49.460 [2024-07-24 18:46:26.973544] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:49.460 18:46:26 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:03:49.460 18:46:26 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:49.460 18:46:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:49.460 18:46:27 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:03:49.460 18:46:27 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:49.460 18:46:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:49.460 18:46:27 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:03:49.460 18:46:27 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:49.460 18:46:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:49.717 MallocBdevForConfigChangeCheck 00:03:49.717 18:46:27 json_config -- 
json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:03:49.717 18:46:27 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:49.717 18:46:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:49.717 18:46:27 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:03:49.717 18:46:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:50.279 18:46:27 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:03:50.280 INFO: shutting down applications... 00:03:50.280 18:46:27 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:03:50.280 18:46:27 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:03:50.280 18:46:27 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:03:50.280 18:46:27 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:52.175 Calling clear_iscsi_subsystem 00:03:52.175 Calling clear_nvmf_subsystem 00:03:52.175 Calling clear_nbd_subsystem 00:03:52.175 Calling clear_ublk_subsystem 00:03:52.175 Calling clear_vhost_blk_subsystem 00:03:52.175 Calling clear_vhost_scsi_subsystem 00:03:52.175 Calling clear_bdev_subsystem 00:03:52.175 18:46:29 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:52.175 18:46:29 json_config -- json_config/json_config.sh@347 -- # count=100 00:03:52.175 18:46:29 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:03:52.175 18:46:29 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:52.175 18:46:29 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:52.175 18:46:29 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:52.175 18:46:29 json_config -- json_config/json_config.sh@349 -- # break 00:03:52.175 18:46:29 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:03:52.175 18:46:29 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:03:52.175 18:46:29 json_config -- json_config/common.sh@31 -- # local app=target 00:03:52.175 18:46:29 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:52.175 18:46:29 json_config -- json_config/common.sh@35 -- # [[ -n 3026664 ]] 00:03:52.175 18:46:29 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3026664 00:03:52.175 18:46:29 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:52.175 18:46:29 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:52.175 18:46:29 json_config -- json_config/common.sh@41 -- # kill -0 3026664 00:03:52.175 18:46:29 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:03:52.742 18:46:30 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:03:52.742 18:46:30 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:52.742 18:46:30 json_config -- json_config/common.sh@41 -- # kill -0 3026664 00:03:52.742 18:46:30 json_config -- 
json_config/common.sh@42 -- # app_pid["$app"]= 00:03:52.742 18:46:30 json_config -- json_config/common.sh@43 -- # break 00:03:52.742 18:46:30 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:52.742 18:46:30 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:52.742 SPDK target shutdown done 00:03:52.742 18:46:30 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:03:52.742 INFO: relaunching applications... 00:03:52.742 18:46:30 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:52.742 18:46:30 json_config -- json_config/common.sh@9 -- # local app=target 00:03:52.742 18:46:30 json_config -- json_config/common.sh@10 -- # shift 00:03:52.742 18:46:30 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:52.742 18:46:30 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:52.742 18:46:30 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:52.742 18:46:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:52.742 18:46:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:52.742 18:46:30 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3027858 00:03:52.742 18:46:30 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:52.742 18:46:30 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:52.742 Waiting for target to run... 00:03:52.742 18:46:30 json_config -- json_config/common.sh@25 -- # waitforlisten 3027858 /var/tmp/spdk_tgt.sock 00:03:52.742 18:46:30 json_config -- common/autotest_common.sh@831 -- # '[' -z 3027858 ']' 00:03:52.742 18:46:30 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:52.742 18:46:30 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:52.742 18:46:30 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:52.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:52.742 18:46:30 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:52.742 18:46:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:52.742 [2024-07-24 18:46:30.284062] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
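The relaunch above boots the target straight from the JSON written by save_config instead of replaying individual RPCs. For reference, the sequence that originally built that configuration, condensed from the tgt_rpc traces earlier (socket, names, and arguments exactly as traced; the $RPC shorthand is only for readability):

    RPC='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

If --json restores state faithfully, the diff that follows must find no difference.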
00:03:52.742 [2024-07-24 18:46:30.284149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3027858 ] 00:03:52.742 EAL: No free 2048 kB hugepages reported on node 1 00:03:53.308 [2024-07-24 18:46:30.802699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:53.566 [2024-07-24 18:46:30.911398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.853 [2024-07-24 18:46:33.954108] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:56.853 [2024-07-24 18:46:33.986565] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:57.110 18:46:34 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:57.110 18:46:34 json_config -- common/autotest_common.sh@864 -- # return 0 00:03:57.110 18:46:34 json_config -- json_config/common.sh@26 -- # echo '' 00:03:57.110 00:03:57.110 18:46:34 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:03:57.110 18:46:34 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:57.110 INFO: Checking if target configuration is the same... 00:03:57.110 18:46:34 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:57.110 18:46:34 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:03:57.110 18:46:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:57.110 + '[' 2 -ne 2 ']' 00:03:57.110 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:57.110 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:57.110 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:57.110 +++ basename /dev/fd/62 00:03:57.110 ++ mktemp /tmp/62.XXX 00:03:57.110 + tmp_file_1=/tmp/62.NGl 00:03:57.110 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:57.110 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:57.110 + tmp_file_2=/tmp/spdk_tgt_config.json.XNq 00:03:57.110 + ret=0 00:03:57.110 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:57.676 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:57.676 + diff -u /tmp/62.NGl /tmp/spdk_tgt_config.json.XNq 00:03:57.676 + echo 'INFO: JSON config files are the same' 00:03:57.676 INFO: JSON config files are the same 00:03:57.676 + rm /tmp/62.NGl /tmp/spdk_tgt_config.json.XNq 00:03:57.676 + exit 0 00:03:57.676 18:46:35 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:03:57.676 18:46:35 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:57.676 INFO: changing configuration and checking if this can be detected... 
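The identical-config verdict above reduces to: dump the live configuration, canonicalize both JSON documents, and diff them. Condensed from the json_diff.sh trace (file names here are illustrative; the actual run uses mktemp paths like /tmp/62.NGl):

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > live.json
    test/json_config/config_filter.py -method sort < live.json            > a.json
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > b.json
    diff -u a.json b.json   # exit 0 -> 'INFO: JSON config files are the same'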
00:03:57.676 18:46:35 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:57.676 18:46:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:57.934 18:46:35 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:57.934 18:46:35 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:03:57.934 18:46:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:57.934 + '[' 2 -ne 2 ']' 00:03:57.934 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:57.934 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:57.934 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:57.934 +++ basename /dev/fd/62 00:03:57.934 ++ mktemp /tmp/62.XXX 00:03:57.934 + tmp_file_1=/tmp/62.6iA 00:03:57.934 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:57.934 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:57.934 + tmp_file_2=/tmp/spdk_tgt_config.json.ZAV 00:03:57.934 + ret=0 00:03:57.934 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:58.192 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:58.450 + diff -u /tmp/62.6iA /tmp/spdk_tgt_config.json.ZAV 00:03:58.450 + ret=1 00:03:58.450 + echo '=== Start of file: /tmp/62.6iA ===' 00:03:58.450 + cat /tmp/62.6iA 00:03:58.450 + echo '=== End of file: /tmp/62.6iA ===' 00:03:58.450 + echo '' 00:03:58.450 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ZAV ===' 00:03:58.450 + cat /tmp/spdk_tgt_config.json.ZAV 00:03:58.450 + echo '=== End of file: /tmp/spdk_tgt_config.json.ZAV ===' 00:03:58.450 + echo '' 00:03:58.450 + rm /tmp/62.6iA /tmp/spdk_tgt_config.json.ZAV 00:03:58.450 + exit 1 00:03:58.450 18:46:35 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:03:58.450 INFO: configuration change detected. 
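For the change-detection pass, the harness first removed MallocBdevForConfigChangeCheck over RPC and then reran the same sort-and-diff cycle; this time diff must return non-zero (ret=1 above), proving that a live change is visible against the stale JSON:

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    # rerun save_config + config_filter sort + diff: expected to differ now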
00:03:58.450 18:46:35 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:03:58.450 18:46:35 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:03:58.450 18:46:35 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:58.450 18:46:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.450 18:46:35 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:03:58.450 18:46:35 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:03:58.450 18:46:35 json_config -- json_config/json_config.sh@321 -- # [[ -n 3027858 ]] 00:03:58.450 18:46:35 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:03:58.450 18:46:35 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:03:58.450 18:46:35 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:58.450 18:46:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.450 18:46:35 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:03:58.450 18:46:35 json_config -- json_config/json_config.sh@197 -- # uname -s 00:03:58.450 18:46:35 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:03:58.450 18:46:35 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:03:58.450 18:46:35 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:03:58.450 18:46:35 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:03:58.450 18:46:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:58.450 18:46:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.450 18:46:35 json_config -- json_config/json_config.sh@327 -- # killprocess 3027858 00:03:58.450 18:46:35 json_config -- common/autotest_common.sh@950 -- # '[' -z 3027858 ']' 00:03:58.450 18:46:35 json_config -- common/autotest_common.sh@954 -- # kill -0 3027858 00:03:58.450 18:46:35 json_config -- common/autotest_common.sh@955 -- # uname 00:03:58.450 18:46:35 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:58.450 18:46:35 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3027858 00:03:58.450 18:46:35 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:58.450 18:46:35 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:58.450 18:46:35 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3027858' 00:03:58.450 killing process with pid 3027858 00:03:58.450 18:46:35 json_config -- common/autotest_common.sh@969 -- # kill 3027858 00:03:58.450 18:46:35 json_config -- common/autotest_common.sh@974 -- # wait 3027858 00:04:00.349 18:46:37 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:00.349 18:46:37 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:04:00.349 18:46:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:00.349 18:46:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.349 18:46:37 json_config -- json_config/json_config.sh@332 -- # return 0 00:04:00.350 18:46:37 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:04:00.350 INFO: Success 00:04:00.350 00:04:00.350 real 0m16.739s 
00:04:00.350 user 0m18.525s 00:04:00.350 sys 0m2.252s 00:04:00.350 18:46:37 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:00.350 18:46:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.350 ************************************ 00:04:00.350 END TEST json_config 00:04:00.350 ************************************ 00:04:00.350 18:46:37 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:00.350 18:46:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:00.350 18:46:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:00.350 18:46:37 -- common/autotest_common.sh@10 -- # set +x 00:04:00.350 ************************************ 00:04:00.350 START TEST json_config_extra_key 00:04:00.350 ************************************ 00:04:00.350 18:46:37 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:00.350 18:46:37 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:00.350 18:46:37 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:00.350 18:46:37 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:00.350 18:46:37 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:00.350 18:46:37 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:00.350 18:46:37 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:00.350 18:46:37 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:00.350 18:46:37 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:00.350 18:46:37 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:00.350 18:46:37 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:00.350 18:46:37 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:00.350 18:46:37 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:00.350 18:46:37 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:04:00.350 18:46:37 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:04:00.350 18:46:37 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:00.350 18:46:37 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:00.350 18:46:37 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:00.350 18:46:37 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:00.350 18:46:37 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:00.350 18:46:37 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:00.350 18:46:37 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:00.350 18:46:37 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:00.350 18:46:37 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:00.350 18:46:37 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:00.350 18:46:37 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:00.350 18:46:37 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:00.350 18:46:37 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:00.350 18:46:37 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:00.350 18:46:37 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:00.350 18:46:37 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:00.350 18:46:37 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:00.350 18:46:37 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:00.350 18:46:37 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:00.350 18:46:37 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:00.350 18:46:37 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:00.350 18:46:37 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:00.350 18:46:37 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:00.350 18:46:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:00.350 18:46:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:00.350 18:46:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:00.350 18:46:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:00.350 18:46:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:00.350 18:46:37 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:00.350 18:46:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:00.350 18:46:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:00.350 18:46:37 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:00.350 18:46:37 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:00.350 INFO: launching applications... 00:04:00.350 18:46:37 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:00.350 18:46:37 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:00.350 18:46:37 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:00.350 18:46:37 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:00.350 18:46:37 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:00.350 18:46:37 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:00.350 18:46:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:00.350 18:46:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:00.350 18:46:37 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3028906 00:04:00.350 18:46:37 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:00.350 18:46:37 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:00.350 Waiting for target to run... 00:04:00.350 18:46:37 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3028906 /var/tmp/spdk_tgt.sock 00:04:00.350 18:46:37 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 3028906 ']' 00:04:00.350 18:46:37 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:00.350 18:46:37 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:00.350 18:46:37 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:00.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:00.350 18:46:37 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:00.350 18:46:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:00.350 [2024-07-24 18:46:37.680729] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
00:04:00.350 [2024-07-24 18:46:37.680822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3028906 ] 00:04:00.350 EAL: No free 2048 kB hugepages reported on node 1 00:04:00.609 [2024-07-24 18:46:38.025973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.609 [2024-07-24 18:46:38.114806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.175 18:46:38 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:01.175 18:46:38 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:01.175 18:46:38 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:01.175 00:04:01.175 18:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:01.175 INFO: shutting down applications... 00:04:01.175 18:46:38 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:01.175 18:46:38 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:01.175 18:46:38 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:01.175 18:46:38 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3028906 ]] 00:04:01.175 18:46:38 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3028906 00:04:01.175 18:46:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:01.175 18:46:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:01.175 18:46:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3028906 00:04:01.175 18:46:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:01.774 18:46:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:01.774 18:46:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:01.774 18:46:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3028906 00:04:01.774 18:46:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:02.360 18:46:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:02.360 18:46:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:02.360 18:46:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3028906 00:04:02.360 18:46:39 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:02.360 18:46:39 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:02.360 18:46:39 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:02.360 18:46:39 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:02.360 SPDK target shutdown done 00:04:02.360 18:46:39 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:02.360 Success 00:04:02.360 00:04:02.360 real 0m2.077s 00:04:02.360 user 0m1.654s 00:04:02.360 sys 0m0.418s 00:04:02.360 18:46:39 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:02.360 18:46:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:02.360 ************************************ 00:04:02.360 END TEST json_config_extra_key 00:04:02.360 ************************************ 00:04:02.360 18:46:39 -- spdk/autotest.sh@174 -- # 
run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:02.360 18:46:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:02.360 18:46:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:02.360 18:46:39 -- common/autotest_common.sh@10 -- # set +x 00:04:02.360 ************************************ 00:04:02.360 START TEST alias_rpc 00:04:02.360 ************************************ 00:04:02.360 18:46:39 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:02.360 * Looking for test storage... 00:04:02.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:02.360 18:46:39 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:02.360 18:46:39 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3029220 00:04:02.360 18:46:39 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:02.360 18:46:39 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3029220 00:04:02.360 18:46:39 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 3029220 ']' 00:04:02.360 18:46:39 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:02.360 18:46:39 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:02.360 18:46:39 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:02.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:02.360 18:46:39 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:02.360 18:46:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.360 [2024-07-24 18:46:39.803186] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
00:04:02.360 [2024-07-24 18:46:39.803295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3029220 ] 00:04:02.360 EAL: No free 2048 kB hugepages reported on node 1 00:04:02.360 [2024-07-24 18:46:39.869027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.619 [2024-07-24 18:46:39.988170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.877 18:46:40 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:02.877 18:46:40 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:02.877 18:46:40 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:03.135 18:46:40 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3029220 00:04:03.135 18:46:40 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 3029220 ']' 00:04:03.135 18:46:40 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 3029220 00:04:03.135 18:46:40 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:03.135 18:46:40 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:03.135 18:46:40 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3029220 00:04:03.135 18:46:40 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:03.135 18:46:40 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:03.135 18:46:40 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3029220' 00:04:03.135 killing process with pid 3029220 00:04:03.135 18:46:40 alias_rpc -- common/autotest_common.sh@969 -- # kill 3029220 00:04:03.135 18:46:40 alias_rpc -- common/autotest_common.sh@974 -- # wait 3029220 00:04:03.702 00:04:03.702 real 0m1.314s 00:04:03.702 user 0m1.397s 00:04:03.702 sys 0m0.438s 00:04:03.702 18:46:41 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:03.702 18:46:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.702 ************************************ 00:04:03.702 END TEST alias_rpc 00:04:03.702 ************************************ 00:04:03.702 18:46:41 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:03.702 18:46:41 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:03.702 18:46:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:03.702 18:46:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:03.702 18:46:41 -- common/autotest_common.sh@10 -- # set +x 00:04:03.702 ************************************ 00:04:03.702 START TEST spdkcli_tcp 00:04:03.702 ************************************ 00:04:03.702 18:46:41 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:03.702 * Looking for test storage... 
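killprocess here and the json_config_extra_key shutdown earlier share one teardown idiom: signal the target, then poll kill -0 until the PID disappears. A condensed sketch with a hypothetical $app_pid, using the 30-iteration, 0.5-second budget (roughly 15 s) that json_config/common.sh shows above:

    kill -SIGINT "$app_pid"                       # request a clean shutdown
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$app_pid" 2>/dev/null || break   # kill -0 fails once the PID is gone
        sleep 0.5
    done
    echo 'SPDK target shutdown done'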
00:04:03.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:03.702 18:46:41 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:03.702 18:46:41 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:03.702 18:46:41 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:03.702 18:46:41 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:03.702 18:46:41 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:03.702 18:46:41 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:03.702 18:46:41 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:03.702 18:46:41 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:03.702 18:46:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:03.702 18:46:41 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3029408 00:04:03.702 18:46:41 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:03.702 18:46:41 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3029408 00:04:03.702 18:46:41 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 3029408 ']' 00:04:03.702 18:46:41 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:03.702 18:46:41 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:03.702 18:46:41 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:03.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:03.702 18:46:41 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:03.702 18:46:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:03.702 [2024-07-24 18:46:41.168500] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
00:04:03.702 [2024-07-24 18:46:41.168577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3029408 ] 00:04:03.702 EAL: No free 2048 kB hugepages reported on node 1 00:04:03.702 [2024-07-24 18:46:41.224433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:03.961 [2024-07-24 18:46:41.331857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:03.961 [2024-07-24 18:46:41.331860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.219 18:46:41 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:04.219 18:46:41 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:04.219 18:46:41 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3029418 00:04:04.219 18:46:41 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:04.219 18:46:41 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:04.477 [ 00:04:04.477 "bdev_malloc_delete", 00:04:04.477 "bdev_malloc_create", 00:04:04.477 "bdev_null_resize", 00:04:04.477 "bdev_null_delete", 00:04:04.477 "bdev_null_create", 00:04:04.477 "bdev_nvme_cuse_unregister", 00:04:04.477 "bdev_nvme_cuse_register", 00:04:04.477 "bdev_opal_new_user", 00:04:04.477 "bdev_opal_set_lock_state", 00:04:04.477 "bdev_opal_delete", 00:04:04.477 "bdev_opal_get_info", 00:04:04.477 "bdev_opal_create", 00:04:04.477 "bdev_nvme_opal_revert", 00:04:04.477 "bdev_nvme_opal_init", 00:04:04.477 "bdev_nvme_send_cmd", 00:04:04.477 "bdev_nvme_get_path_iostat", 00:04:04.477 "bdev_nvme_get_mdns_discovery_info", 00:04:04.477 "bdev_nvme_stop_mdns_discovery", 00:04:04.477 "bdev_nvme_start_mdns_discovery", 00:04:04.477 "bdev_nvme_set_multipath_policy", 00:04:04.477 "bdev_nvme_set_preferred_path", 00:04:04.477 "bdev_nvme_get_io_paths", 00:04:04.477 "bdev_nvme_remove_error_injection", 00:04:04.477 "bdev_nvme_add_error_injection", 00:04:04.477 "bdev_nvme_get_discovery_info", 00:04:04.477 "bdev_nvme_stop_discovery", 00:04:04.477 "bdev_nvme_start_discovery", 00:04:04.477 "bdev_nvme_get_controller_health_info", 00:04:04.477 "bdev_nvme_disable_controller", 00:04:04.477 "bdev_nvme_enable_controller", 00:04:04.477 "bdev_nvme_reset_controller", 00:04:04.477 "bdev_nvme_get_transport_statistics", 00:04:04.477 "bdev_nvme_apply_firmware", 00:04:04.477 "bdev_nvme_detach_controller", 00:04:04.477 "bdev_nvme_get_controllers", 00:04:04.477 "bdev_nvme_attach_controller", 00:04:04.477 "bdev_nvme_set_hotplug", 00:04:04.477 "bdev_nvme_set_options", 00:04:04.477 "bdev_passthru_delete", 00:04:04.477 "bdev_passthru_create", 00:04:04.477 "bdev_lvol_set_parent_bdev", 00:04:04.477 "bdev_lvol_set_parent", 00:04:04.477 "bdev_lvol_check_shallow_copy", 00:04:04.477 "bdev_lvol_start_shallow_copy", 00:04:04.477 "bdev_lvol_grow_lvstore", 00:04:04.477 "bdev_lvol_get_lvols", 00:04:04.477 "bdev_lvol_get_lvstores", 00:04:04.477 "bdev_lvol_delete", 00:04:04.477 "bdev_lvol_set_read_only", 00:04:04.477 "bdev_lvol_resize", 00:04:04.477 "bdev_lvol_decouple_parent", 00:04:04.478 "bdev_lvol_inflate", 00:04:04.478 "bdev_lvol_rename", 00:04:04.478 "bdev_lvol_clone_bdev", 00:04:04.478 "bdev_lvol_clone", 00:04:04.478 "bdev_lvol_snapshot", 00:04:04.478 "bdev_lvol_create", 00:04:04.478 "bdev_lvol_delete_lvstore", 00:04:04.478 
"bdev_lvol_rename_lvstore", 00:04:04.478 "bdev_lvol_create_lvstore", 00:04:04.478 "bdev_raid_set_options", 00:04:04.478 "bdev_raid_remove_base_bdev", 00:04:04.478 "bdev_raid_add_base_bdev", 00:04:04.478 "bdev_raid_delete", 00:04:04.478 "bdev_raid_create", 00:04:04.478 "bdev_raid_get_bdevs", 00:04:04.478 "bdev_error_inject_error", 00:04:04.478 "bdev_error_delete", 00:04:04.478 "bdev_error_create", 00:04:04.478 "bdev_split_delete", 00:04:04.478 "bdev_split_create", 00:04:04.478 "bdev_delay_delete", 00:04:04.478 "bdev_delay_create", 00:04:04.478 "bdev_delay_update_latency", 00:04:04.478 "bdev_zone_block_delete", 00:04:04.478 "bdev_zone_block_create", 00:04:04.478 "blobfs_create", 00:04:04.478 "blobfs_detect", 00:04:04.478 "blobfs_set_cache_size", 00:04:04.478 "bdev_aio_delete", 00:04:04.478 "bdev_aio_rescan", 00:04:04.478 "bdev_aio_create", 00:04:04.478 "bdev_ftl_set_property", 00:04:04.478 "bdev_ftl_get_properties", 00:04:04.478 "bdev_ftl_get_stats", 00:04:04.478 "bdev_ftl_unmap", 00:04:04.478 "bdev_ftl_unload", 00:04:04.478 "bdev_ftl_delete", 00:04:04.478 "bdev_ftl_load", 00:04:04.478 "bdev_ftl_create", 00:04:04.478 "bdev_virtio_attach_controller", 00:04:04.478 "bdev_virtio_scsi_get_devices", 00:04:04.478 "bdev_virtio_detach_controller", 00:04:04.478 "bdev_virtio_blk_set_hotplug", 00:04:04.478 "bdev_iscsi_delete", 00:04:04.478 "bdev_iscsi_create", 00:04:04.478 "bdev_iscsi_set_options", 00:04:04.478 "accel_error_inject_error", 00:04:04.478 "ioat_scan_accel_module", 00:04:04.478 "dsa_scan_accel_module", 00:04:04.478 "iaa_scan_accel_module", 00:04:04.478 "vfu_virtio_create_scsi_endpoint", 00:04:04.478 "vfu_virtio_scsi_remove_target", 00:04:04.478 "vfu_virtio_scsi_add_target", 00:04:04.478 "vfu_virtio_create_blk_endpoint", 00:04:04.478 "vfu_virtio_delete_endpoint", 00:04:04.478 "keyring_file_remove_key", 00:04:04.478 "keyring_file_add_key", 00:04:04.478 "keyring_linux_set_options", 00:04:04.478 "iscsi_get_histogram", 00:04:04.478 "iscsi_enable_histogram", 00:04:04.478 "iscsi_set_options", 00:04:04.478 "iscsi_get_auth_groups", 00:04:04.478 "iscsi_auth_group_remove_secret", 00:04:04.478 "iscsi_auth_group_add_secret", 00:04:04.478 "iscsi_delete_auth_group", 00:04:04.478 "iscsi_create_auth_group", 00:04:04.478 "iscsi_set_discovery_auth", 00:04:04.478 "iscsi_get_options", 00:04:04.478 "iscsi_target_node_request_logout", 00:04:04.478 "iscsi_target_node_set_redirect", 00:04:04.478 "iscsi_target_node_set_auth", 00:04:04.478 "iscsi_target_node_add_lun", 00:04:04.478 "iscsi_get_stats", 00:04:04.478 "iscsi_get_connections", 00:04:04.478 "iscsi_portal_group_set_auth", 00:04:04.478 "iscsi_start_portal_group", 00:04:04.478 "iscsi_delete_portal_group", 00:04:04.478 "iscsi_create_portal_group", 00:04:04.478 "iscsi_get_portal_groups", 00:04:04.478 "iscsi_delete_target_node", 00:04:04.478 "iscsi_target_node_remove_pg_ig_maps", 00:04:04.478 "iscsi_target_node_add_pg_ig_maps", 00:04:04.478 "iscsi_create_target_node", 00:04:04.478 "iscsi_get_target_nodes", 00:04:04.478 "iscsi_delete_initiator_group", 00:04:04.478 "iscsi_initiator_group_remove_initiators", 00:04:04.478 "iscsi_initiator_group_add_initiators", 00:04:04.478 "iscsi_create_initiator_group", 00:04:04.478 "iscsi_get_initiator_groups", 00:04:04.478 "nvmf_set_crdt", 00:04:04.478 "nvmf_set_config", 00:04:04.478 "nvmf_set_max_subsystems", 00:04:04.478 "nvmf_stop_mdns_prr", 00:04:04.478 "nvmf_publish_mdns_prr", 00:04:04.478 "nvmf_subsystem_get_listeners", 00:04:04.478 "nvmf_subsystem_get_qpairs", 00:04:04.478 "nvmf_subsystem_get_controllers", 00:04:04.478 
"nvmf_get_stats", 00:04:04.478 "nvmf_get_transports", 00:04:04.478 "nvmf_create_transport", 00:04:04.478 "nvmf_get_targets", 00:04:04.478 "nvmf_delete_target", 00:04:04.478 "nvmf_create_target", 00:04:04.478 "nvmf_subsystem_allow_any_host", 00:04:04.478 "nvmf_subsystem_remove_host", 00:04:04.478 "nvmf_subsystem_add_host", 00:04:04.478 "nvmf_ns_remove_host", 00:04:04.478 "nvmf_ns_add_host", 00:04:04.478 "nvmf_subsystem_remove_ns", 00:04:04.478 "nvmf_subsystem_add_ns", 00:04:04.478 "nvmf_subsystem_listener_set_ana_state", 00:04:04.478 "nvmf_discovery_get_referrals", 00:04:04.478 "nvmf_discovery_remove_referral", 00:04:04.478 "nvmf_discovery_add_referral", 00:04:04.478 "nvmf_subsystem_remove_listener", 00:04:04.478 "nvmf_subsystem_add_listener", 00:04:04.478 "nvmf_delete_subsystem", 00:04:04.478 "nvmf_create_subsystem", 00:04:04.478 "nvmf_get_subsystems", 00:04:04.478 "env_dpdk_get_mem_stats", 00:04:04.478 "nbd_get_disks", 00:04:04.478 "nbd_stop_disk", 00:04:04.478 "nbd_start_disk", 00:04:04.478 "ublk_recover_disk", 00:04:04.478 "ublk_get_disks", 00:04:04.478 "ublk_stop_disk", 00:04:04.478 "ublk_start_disk", 00:04:04.478 "ublk_destroy_target", 00:04:04.478 "ublk_create_target", 00:04:04.478 "virtio_blk_create_transport", 00:04:04.478 "virtio_blk_get_transports", 00:04:04.478 "vhost_controller_set_coalescing", 00:04:04.478 "vhost_get_controllers", 00:04:04.478 "vhost_delete_controller", 00:04:04.478 "vhost_create_blk_controller", 00:04:04.478 "vhost_scsi_controller_remove_target", 00:04:04.478 "vhost_scsi_controller_add_target", 00:04:04.478 "vhost_start_scsi_controller", 00:04:04.478 "vhost_create_scsi_controller", 00:04:04.478 "thread_set_cpumask", 00:04:04.478 "framework_get_governor", 00:04:04.478 "framework_get_scheduler", 00:04:04.478 "framework_set_scheduler", 00:04:04.478 "framework_get_reactors", 00:04:04.478 "thread_get_io_channels", 00:04:04.478 "thread_get_pollers", 00:04:04.478 "thread_get_stats", 00:04:04.478 "framework_monitor_context_switch", 00:04:04.478 "spdk_kill_instance", 00:04:04.478 "log_enable_timestamps", 00:04:04.478 "log_get_flags", 00:04:04.478 "log_clear_flag", 00:04:04.478 "log_set_flag", 00:04:04.478 "log_get_level", 00:04:04.478 "log_set_level", 00:04:04.478 "log_get_print_level", 00:04:04.478 "log_set_print_level", 00:04:04.478 "framework_enable_cpumask_locks", 00:04:04.478 "framework_disable_cpumask_locks", 00:04:04.478 "framework_wait_init", 00:04:04.478 "framework_start_init", 00:04:04.478 "scsi_get_devices", 00:04:04.478 "bdev_get_histogram", 00:04:04.478 "bdev_enable_histogram", 00:04:04.478 "bdev_set_qos_limit", 00:04:04.478 "bdev_set_qd_sampling_period", 00:04:04.478 "bdev_get_bdevs", 00:04:04.478 "bdev_reset_iostat", 00:04:04.478 "bdev_get_iostat", 00:04:04.478 "bdev_examine", 00:04:04.478 "bdev_wait_for_examine", 00:04:04.478 "bdev_set_options", 00:04:04.478 "notify_get_notifications", 00:04:04.478 "notify_get_types", 00:04:04.478 "accel_get_stats", 00:04:04.478 "accel_set_options", 00:04:04.478 "accel_set_driver", 00:04:04.478 "accel_crypto_key_destroy", 00:04:04.478 "accel_crypto_keys_get", 00:04:04.478 "accel_crypto_key_create", 00:04:04.478 "accel_assign_opc", 00:04:04.478 "accel_get_module_info", 00:04:04.478 "accel_get_opc_assignments", 00:04:04.478 "vmd_rescan", 00:04:04.478 "vmd_remove_device", 00:04:04.478 "vmd_enable", 00:04:04.478 "sock_get_default_impl", 00:04:04.478 "sock_set_default_impl", 00:04:04.478 "sock_impl_set_options", 00:04:04.478 "sock_impl_get_options", 00:04:04.478 "iobuf_get_stats", 00:04:04.478 "iobuf_set_options", 
00:04:04.478 "keyring_get_keys", 00:04:04.478 "framework_get_pci_devices", 00:04:04.478 "framework_get_config", 00:04:04.478 "framework_get_subsystems", 00:04:04.478 "vfu_tgt_set_base_path", 00:04:04.478 "trace_get_info", 00:04:04.478 "trace_get_tpoint_group_mask", 00:04:04.478 "trace_disable_tpoint_group", 00:04:04.478 "trace_enable_tpoint_group", 00:04:04.478 "trace_clear_tpoint_mask", 00:04:04.478 "trace_set_tpoint_mask", 00:04:04.478 "spdk_get_version", 00:04:04.478 "rpc_get_methods" 00:04:04.478 ] 00:04:04.478 18:46:41 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:04.478 18:46:41 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:04.478 18:46:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:04.478 18:46:41 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:04.478 18:46:41 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3029408 00:04:04.478 18:46:41 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 3029408 ']' 00:04:04.478 18:46:41 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 3029408 00:04:04.478 18:46:41 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:04.478 18:46:41 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:04.478 18:46:41 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3029408 00:04:04.478 18:46:41 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:04.478 18:46:41 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:04.479 18:46:41 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3029408' 00:04:04.479 killing process with pid 3029408 00:04:04.479 18:46:41 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 3029408 00:04:04.479 18:46:41 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 3029408 00:04:05.045 00:04:05.045 real 0m1.330s 00:04:05.045 user 0m2.297s 00:04:05.045 sys 0m0.464s 00:04:05.045 18:46:42 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:05.045 18:46:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:05.045 ************************************ 00:04:05.045 END TEST spdkcli_tcp 00:04:05.045 ************************************ 00:04:05.045 18:46:42 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:05.045 18:46:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.045 18:46:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.045 18:46:42 -- common/autotest_common.sh@10 -- # set +x 00:04:05.045 ************************************ 00:04:05.045 START TEST dpdk_mem_utility 00:04:05.045 ************************************ 00:04:05.045 18:46:42 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:05.045 * Looking for test storage... 
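The spdkcli_tcp pass that just finished exercises the JSON-RPC server over TCP by bridging the UNIX socket with socat; rpc_get_methods returning the full method list above is the success check. The two commands from tcp.sh, combined into one runnable sequence (port 9998 and the -r/-t flags are exactly as logged):

    # expose the UNIX-domain RPC socket on 127.0.0.1:9998
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # drive the target over TCP; -r = connection retries, -t = timeout in seconds
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"    # tear the bridge down afterwards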
00:04:05.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:05.045 18:46:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:05.045 18:46:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3029614 00:04:05.045 18:46:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:05.045 18:46:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3029614 00:04:05.045 18:46:42 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 3029614 ']' 00:04:05.045 18:46:42 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:05.045 18:46:42 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:05.045 18:46:42 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:05.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:05.045 18:46:42 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:05.045 18:46:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:05.045 [2024-07-24 18:46:42.542369] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:04:05.045 [2024-07-24 18:46:42.542463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3029614 ] 00:04:05.045 EAL: No free 2048 kB hugepages reported on node 1 00:04:05.045 [2024-07-24 18:46:42.598790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.303 [2024-07-24 18:46:42.706284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.562 18:46:42 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:05.562 18:46:42 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:05.562 18:46:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:05.562 18:46:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:05.562 18:46:42 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:05.562 18:46:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:05.562 { 00:04:05.562 "filename": "/tmp/spdk_mem_dump.txt" 00:04:05.562 } 00:04:05.562 18:46:42 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:05.562 18:46:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:05.562 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:05.562 1 heaps totaling size 814.000000 MiB 00:04:05.562 size: 814.000000 MiB heap id: 0 00:04:05.562 end heaps---------- 00:04:05.562 8 mempools totaling size 598.116089 MiB 00:04:05.562 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:05.562 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:05.562 size: 84.521057 MiB name: bdev_io_3029614 00:04:05.562 size: 51.011292 MiB name: evtpool_3029614 00:04:05.562 
size: 50.003479 MiB name: msgpool_3029614 00:04:05.562 size: 21.763794 MiB name: PDU_Pool 00:04:05.562 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:05.562 size: 0.026123 MiB name: Session_Pool 00:04:05.562 end mempools------- 00:04:05.562 6 memzones totaling size 4.142822 MiB 00:04:05.562 size: 1.000366 MiB name: RG_ring_0_3029614 00:04:05.562 size: 1.000366 MiB name: RG_ring_1_3029614 00:04:05.562 size: 1.000366 MiB name: RG_ring_4_3029614 00:04:05.562 size: 1.000366 MiB name: RG_ring_5_3029614 00:04:05.562 size: 0.125366 MiB name: RG_ring_2_3029614 00:04:05.562 size: 0.015991 MiB name: RG_ring_3_3029614 00:04:05.562 end memzones------- 00:04:05.562 18:46:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:05.562 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:05.562 list of free elements. size: 12.519348 MiB 00:04:05.562 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:05.562 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:05.562 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:05.562 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:05.562 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:05.562 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:05.562 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:05.562 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:05.562 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:05.562 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:05.562 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:05.562 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:05.562 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:05.562 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:05.562 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:05.562 list of standard malloc elements. 
size: 199.218079 MiB 00:04:05.562 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:05.562 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:05.562 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:05.562 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:05.562 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:05.562 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:05.562 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:05.562 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:05.562 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:05.562 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:05.562 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:05.562 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:05.562 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:05.562 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:05.562 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:05.562 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:05.562 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:05.562 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:05.562 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:05.562 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:05.562 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:05.562 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:05.562 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:05.562 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:05.562 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:05.562 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:05.562 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:05.562 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:05.562 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:05.563 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:05.563 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:05.563 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:05.563 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:05.563 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:05.563 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:05.563 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:05.563 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:05.563 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:05.563 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:05.563 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:05.563 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:05.563 list of memzone associated elements. 
size: 602.262573 MiB 00:04:05.563 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:05.563 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:05.563 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:05.563 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:05.563 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:05.563 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3029614_0 00:04:05.563 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:05.563 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3029614_0 00:04:05.563 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:05.563 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3029614_0 00:04:05.563 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:05.563 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:05.563 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:05.563 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:05.563 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:05.563 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3029614 00:04:05.563 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:05.563 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3029614 00:04:05.563 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:05.563 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3029614 00:04:05.563 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:05.563 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:05.563 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:05.563 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:05.563 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:05.563 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:05.563 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:05.563 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:05.563 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:05.563 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3029614 00:04:05.563 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:05.563 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3029614 00:04:05.563 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:05.563 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3029614 00:04:05.563 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:05.563 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3029614 00:04:05.563 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:05.563 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3029614 00:04:05.563 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:05.563 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:05.563 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:05.563 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:05.563 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:05.563 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:05.563 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:05.563 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3029614 00:04:05.563 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:05.563 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:05.563 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:05.563 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:05.563 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:05.563 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3029614 00:04:05.563 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:05.563 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:05.563 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:05.563 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3029614 00:04:05.563 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:05.563 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3029614 00:04:05.563 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:05.563 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:05.563 18:46:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:05.563 18:46:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3029614 00:04:05.563 18:46:43 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 3029614 ']' 00:04:05.563 18:46:43 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 3029614 00:04:05.563 18:46:43 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:04:05.563 18:46:43 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:05.563 18:46:43 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3029614 00:04:05.563 18:46:43 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:05.563 18:46:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:05.563 18:46:43 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3029614' 00:04:05.563 killing process with pid 3029614 00:04:05.563 18:46:43 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 3029614 00:04:05.563 18:46:43 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 3029614 00:04:06.128 00:04:06.128 real 0m1.143s 00:04:06.128 user 0m1.115s 00:04:06.128 sys 0m0.395s 00:04:06.128 18:46:43 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.128 18:46:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:06.128 ************************************ 00:04:06.128 END TEST dpdk_mem_utility 00:04:06.128 ************************************ 00:04:06.128 18:46:43 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:06.128 18:46:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.128 18:46:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.128 18:46:43 -- common/autotest_common.sh@10 -- # set +x 00:04:06.128 ************************************ 00:04:06.128 START TEST event 00:04:06.128 ************************************ 00:04:06.128 18:46:43 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:06.128 * Looking for test storage... 
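The dpdk_mem_utility pass asks the running target to dump its DPDK allocation state and then post-processes that dump; everything above (heap totals, mempools, memzones, and the per-element heap listing) comes from these three steps, reconstructed from the log:

    # have spdk_tgt write its DPDK memory statistics
    scripts/rpc.py env_dpdk_get_mem_stats     # replies {"filename": "/tmp/spdk_mem_dump.txt"}

    # summarize heaps, mempools and memzones from the dump
    scripts/dpdk_mem_info.py

    # -m <heap-id>: per-element detail for a single heap (heap 0 here)
    scripts/dpdk_mem_info.py -m 0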
00:04:06.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:06.128 18:46:43 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:06.128 18:46:43 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:06.128 18:46:43 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:06.128 18:46:43 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:04:06.128 18:46:43 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.128 18:46:43 event -- common/autotest_common.sh@10 -- # set +x 00:04:06.128 ************************************ 00:04:06.128 START TEST event_perf 00:04:06.128 ************************************ 00:04:06.128 18:46:43 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:06.128 Running I/O for 1 seconds...[2024-07-24 18:46:43.721792] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:04:06.128 [2024-07-24 18:46:43.721856] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3029804 ] 00:04:06.385 EAL: No free 2048 kB hugepages reported on node 1 00:04:06.385 [2024-07-24 18:46:43.787703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:06.385 [2024-07-24 18:46:43.908698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:06.385 [2024-07-24 18:46:43.908764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:06.385 [2024-07-24 18:46:43.908857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:06.385 [2024-07-24 18:46:43.908861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.758 Running I/O for 1 seconds... 00:04:07.758 lcore 0: 237931 00:04:07.758 lcore 1: 237930 00:04:07.758 lcore 2: 237930 00:04:07.758 lcore 3: 237930 00:04:07.758 done. 00:04:07.758 00:04:07.758 real 0m1.323s 00:04:07.758 user 0m4.230s 00:04:07.758 sys 0m0.088s 00:04:07.758 18:46:45 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.758 18:46:45 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:07.758 ************************************ 00:04:07.758 END TEST event_perf 00:04:07.758 ************************************ 00:04:07.758 18:46:45 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:07.758 18:46:45 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:07.758 18:46:45 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.758 18:46:45 event -- common/autotest_common.sh@10 -- # set +x 00:04:07.758 ************************************ 00:04:07.758 START TEST event_reactor 00:04:07.758 ************************************ 00:04:07.758 18:46:45 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:07.758 [2024-07-24 18:46:45.090168] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
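event_perf above spins the full 0xF reactor mask for one second and prints how many events each lcore processed (about 238k apiece in this run). Invocation as run by event.sh:

    # -m: reactor core mask (0xF = cores 0-3), -t: run time in seconds
    test/event/event_perf/event_perf -m 0xF -t 1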
00:04:07.758 [2024-07-24 18:46:45.090226] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3030068 ] 00:04:07.758 EAL: No free 2048 kB hugepages reported on node 1 00:04:07.758 [2024-07-24 18:46:45.151116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.759 [2024-07-24 18:46:45.271949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.133 test_start 00:04:09.133 oneshot 00:04:09.133 tick 100 00:04:09.133 tick 100 00:04:09.133 tick 250 00:04:09.133 tick 100 00:04:09.133 tick 100 00:04:09.133 tick 100 00:04:09.133 tick 250 00:04:09.133 tick 500 00:04:09.133 tick 100 00:04:09.133 tick 100 00:04:09.133 tick 250 00:04:09.133 tick 100 00:04:09.133 tick 100 00:04:09.133 test_end 00:04:09.133 00:04:09.133 real 0m1.319s 00:04:09.133 user 0m1.225s 00:04:09.133 sys 0m0.090s 00:04:09.133 18:46:46 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:09.133 18:46:46 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:09.133 ************************************ 00:04:09.133 END TEST event_reactor 00:04:09.133 ************************************ 00:04:09.133 18:46:46 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:09.133 18:46:46 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:09.133 18:46:46 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:09.133 18:46:46 event -- common/autotest_common.sh@10 -- # set +x 00:04:09.133 ************************************ 00:04:09.133 START TEST event_reactor_perf 00:04:09.133 ************************************ 00:04:09.133 18:46:46 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:09.133 [2024-07-24 18:46:46.454765] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
00:04:09.133 [2024-07-24 18:46:46.454829] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3030240 ] 00:04:09.133 EAL: No free 2048 kB hugepages reported on node 1 00:04:09.133 [2024-07-24 18:46:46.519889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.133 [2024-07-24 18:46:46.639981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.506 test_start 00:04:10.506 test_end 00:04:10.506 Performance: 357245 events per second 00:04:10.506 00:04:10.506 real 0m1.316s 00:04:10.506 user 0m1.232s 00:04:10.506 sys 0m0.080s 00:04:10.506 18:46:47 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:10.506 18:46:47 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:10.506 ************************************ 00:04:10.506 END TEST event_reactor_perf 00:04:10.506 ************************************ 00:04:10.506 18:46:47 event -- event/event.sh@49 -- # uname -s 00:04:10.506 18:46:47 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:10.506 18:46:47 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:10.506 18:46:47 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:10.506 18:46:47 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:10.506 18:46:47 event -- common/autotest_common.sh@10 -- # set +x 00:04:10.506 ************************************ 00:04:10.506 START TEST event_scheduler 00:04:10.506 ************************************ 00:04:10.506 18:46:47 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:10.506 * Looking for test storage... 00:04:10.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:10.506 18:46:47 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:10.506 18:46:47 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3030423 00:04:10.506 18:46:47 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:10.506 18:46:47 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.506 18:46:47 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3030423 00:04:10.506 18:46:47 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 3030423 ']' 00:04:10.506 18:46:47 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:10.506 18:46:47 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:10.506 18:46:47 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:10.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:10.507 18:46:47 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:10.507 18:46:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:10.507 [2024-07-24 18:46:47.899726] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:04:10.507 [2024-07-24 18:46:47.899800] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3030423 ] 00:04:10.507 EAL: No free 2048 kB hugepages reported on node 1 00:04:10.507 [2024-07-24 18:46:47.956057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:10.507 [2024-07-24 18:46:48.064862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.507 [2024-07-24 18:46:48.064919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:10.507 [2024-07-24 18:46:48.064984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:10.507 [2024-07-24 18:46:48.064987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:10.507 18:46:48 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:10.507 18:46:48 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:04:10.507 18:46:48 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:10.765 18:46:48 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.765 18:46:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:10.765 [2024-07-24 18:46:48.113749] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:10.765 [2024-07-24 18:46:48.113775] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:10.765 [2024-07-24 18:46:48.113807] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:10.765 [2024-07-24 18:46:48.113817] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:10.765 [2024-07-24 18:46:48.113827] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:10.765 18:46:48 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.765 18:46:48 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:10.765 18:46:48 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.765 18:46:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:10.765 [2024-07-24 18:46:48.211990] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
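Because scheduler.sh launches the app with --wait-for-rpc, framework initialization is held back until the test selects a scheduler over RPC; the dpdk_governor error above is the expected fallback when the core mask covers only part of an SMT-sibling set, after which the dynamic scheduler still comes up with its default load/core/busy limits. The equivalent manual sequence:

    # select the dynamic scheduler, then let the framework finish booting
    scripts/rpc.py framework_set_scheduler dynamic
    scripts/rpc.py framework_start_init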
00:04:10.765 18:46:48 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.765 18:46:48 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:10.765 18:46:48 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:10.765 18:46:48 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:10.765 18:46:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:10.765 ************************************ 00:04:10.765 START TEST scheduler_create_thread 00:04:10.765 ************************************ 00:04:10.765 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:04:10.765 18:46:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:10.765 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.765 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.765 2 00:04:10.765 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.765 18:46:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:10.765 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.765 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.765 3 00:04:10.765 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.766 4 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.766 5 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.766 6 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.766 7 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.766 8 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.766 9 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.766 10 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.766 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.332 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:11.332 00:04:11.332 real 0m0.591s 00:04:11.332 user 0m0.009s 00:04:11.332 sys 0m0.004s 00:04:11.332 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:11.332 18:46:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.332 ************************************ 00:04:11.332 END TEST scheduler_create_thread 00:04:11.332 ************************************ 00:04:11.332 18:46:48 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:11.332 18:46:48 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3030423 00:04:11.332 18:46:48 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 3030423 ']' 00:04:11.332 18:46:48 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 3030423 00:04:11.332 18:46:48 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:04:11.332 18:46:48 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:11.332 18:46:48 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3030423 00:04:11.332 18:46:48 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:04:11.332 18:46:48 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:04:11.332 18:46:48 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3030423' 00:04:11.332 killing process with pid 3030423 00:04:11.332 18:46:48 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 3030423 00:04:11.332 18:46:48 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 3030423 00:04:11.897 [2024-07-24 18:46:49.312241] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
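scheduler_create_thread builds its whole workload through the test-only RPC plugin: pinned active and idle threads on each core, one activity retune, and one delete. The calls from scheduler.sh spelled out (thread IDs 11 and 12 are simply what the create calls returned in this run):

    # a pinned thread on core 0 reporting 100% activity
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100

    # a pinned idle thread (0% activity) on core 1
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0

    # retune an existing thread to 50% activity, then remove another
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12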
00:04:12.155 00:04:12.155 real 0m1.772s 00:04:12.155 user 0m2.254s 00:04:12.155 sys 0m0.316s 00:04:12.155 18:46:49 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:12.155 18:46:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:12.155 ************************************ 00:04:12.155 END TEST event_scheduler 00:04:12.155 ************************************ 00:04:12.155 18:46:49 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:12.155 18:46:49 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:12.155 18:46:49 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:12.155 18:46:49 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:12.155 18:46:49 event -- common/autotest_common.sh@10 -- # set +x 00:04:12.155 ************************************ 00:04:12.155 START TEST app_repeat 00:04:12.155 ************************************ 00:04:12.155 18:46:49 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:04:12.155 18:46:49 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:12.155 18:46:49 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:12.155 18:46:49 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:12.156 18:46:49 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:12.156 18:46:49 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:12.156 18:46:49 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:12.156 18:46:49 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:12.156 18:46:49 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3030734 00:04:12.156 18:46:49 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:12.156 18:46:49 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:12.156 18:46:49 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3030734' 00:04:12.156 Process app_repeat pid: 3030734 00:04:12.156 18:46:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:12.156 18:46:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:12.156 spdk_app_start Round 0 00:04:12.156 18:46:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3030734 /var/tmp/spdk-nbd.sock 00:04:12.156 18:46:49 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3030734 ']' 00:04:12.156 18:46:49 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:12.156 18:46:49 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:12.156 18:46:49 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:12.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:12.156 18:46:49 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:12.156 18:46:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:12.156 [2024-07-24 18:46:49.644097] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
00:04:12.156 [2024-07-24 18:46:49.644194] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3030734 ] 00:04:12.156 EAL: No free 2048 kB hugepages reported on node 1 00:04:12.156 [2024-07-24 18:46:49.701594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:12.413 [2024-07-24 18:46:49.814486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:12.413 [2024-07-24 18:46:49.814490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.413 18:46:49 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:12.413 18:46:49 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:12.413 18:46:49 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:12.671 Malloc0 00:04:12.671 18:46:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:12.930 Malloc1 00:04:12.930 18:46:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:12.930 18:46:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:12.930 18:46:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:12.930 18:46:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:12.930 18:46:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:12.930 18:46:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:12.930 18:46:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:12.930 18:46:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:12.930 18:46:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:12.930 18:46:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:12.930 18:46:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:12.930 18:46:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:12.930 18:46:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:12.930 18:46:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:12.930 18:46:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:12.930 18:46:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:13.188 /dev/nbd0 00:04:13.188 18:46:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:13.188 18:46:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:13.188 18:46:50 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:13.188 18:46:50 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:13.188 18:46:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:13.188 18:46:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:13.188 18:46:50 event.app_repeat 
-- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:13.188 18:46:50 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:13.188 18:46:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:13.188 18:46:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:13.188 18:46:50 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:13.188 1+0 records in 00:04:13.188 1+0 records out 00:04:13.188 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232209 s, 17.6 MB/s 00:04:13.188 18:46:50 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:13.188 18:46:50 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:13.188 18:46:50 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:13.188 18:46:50 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:13.188 18:46:50 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:13.188 18:46:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:13.188 18:46:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:13.188 18:46:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:13.447 /dev/nbd1 00:04:13.447 18:46:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:13.447 18:46:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:13.447 18:46:50 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:13.447 18:46:50 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:13.447 18:46:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:13.447 18:46:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:13.447 18:46:51 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:13.447 18:46:51 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:13.447 18:46:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:13.447 18:46:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:13.447 18:46:51 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:13.447 1+0 records in 00:04:13.447 1+0 records out 00:04:13.447 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212335 s, 19.3 MB/s 00:04:13.447 18:46:51 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:13.447 18:46:51 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:13.447 18:46:51 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:13.447 18:46:51 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:13.447 18:46:51 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:13.447 18:46:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:13.447 18:46:51 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:13.447 18:46:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:13.447 18:46:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.447 18:46:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:13.705 18:46:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:13.705 { 00:04:13.705 "nbd_device": "/dev/nbd0", 00:04:13.705 "bdev_name": "Malloc0" 00:04:13.705 }, 00:04:13.705 { 00:04:13.705 "nbd_device": "/dev/nbd1", 00:04:13.705 "bdev_name": "Malloc1" 00:04:13.705 } 00:04:13.705 ]' 00:04:13.705 18:46:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:13.705 { 00:04:13.705 "nbd_device": "/dev/nbd0", 00:04:13.705 "bdev_name": "Malloc0" 00:04:13.705 }, 00:04:13.705 { 00:04:13.705 "nbd_device": "/dev/nbd1", 00:04:13.705 "bdev_name": "Malloc1" 00:04:13.705 } 00:04:13.705 ]' 00:04:13.705 18:46:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:13.963 18:46:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:13.963 /dev/nbd1' 00:04:13.963 18:46:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:13.963 /dev/nbd1' 00:04:13.963 18:46:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:13.963 18:46:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:13.963 18:46:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:13.963 18:46:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:13.963 18:46:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:13.963 18:46:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:13.963 18:46:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.963 18:46:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:13.963 18:46:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:13.963 18:46:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:13.963 18:46:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:13.963 18:46:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:13.963 256+0 records in 00:04:13.964 256+0 records out 00:04:13.964 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00530631 s, 198 MB/s 00:04:13.964 18:46:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:13.964 18:46:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:13.964 256+0 records in 00:04:13.964 256+0 records out 00:04:13.964 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244061 s, 43.0 MB/s 00:04:13.964 18:46:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:13.964 18:46:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:13.964 256+0 records in 00:04:13.964 256+0 records out 00:04:13.964 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0227552 s, 46.1 MB/s 00:04:13.964 18:46:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:13.964 18:46:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.964 18:46:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:13.964 18:46:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:13.964 18:46:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:13.964 18:46:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:13.964 18:46:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:13.964 18:46:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:13.964 18:46:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:13.964 18:46:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:13.964 18:46:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:13.964 18:46:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:13.964 18:46:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:13.964 18:46:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.964 18:46:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.964 18:46:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:13.964 18:46:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:13.964 18:46:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:13.964 18:46:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:14.222 18:46:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:14.222 18:46:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:14.222 18:46:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:14.222 18:46:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:14.222 18:46:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:14.222 18:46:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:14.222 18:46:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:14.222 18:46:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:14.222 18:46:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:14.222 18:46:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:14.480 18:46:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:14.480 18:46:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:14.480 18:46:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:14.480 18:46:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:14.480 18:46:51 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:14.480 18:46:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:14.480 18:46:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:14.480 18:46:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:14.480 18:46:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:14.480 18:46:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:14.480 18:46:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:14.737 18:46:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:14.737 18:46:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:14.738 18:46:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:14.738 18:46:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:14.738 18:46:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:14.738 18:46:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:14.738 18:46:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:14.738 18:46:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:14.738 18:46:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:14.738 18:46:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:14.738 18:46:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:14.738 18:46:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:14.738 18:46:52 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:14.995 18:46:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:15.253 [2024-07-24 18:46:52.775210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:15.512 [2024-07-24 18:46:52.889873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.512 [2024-07-24 18:46:52.889874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:15.512 [2024-07-24 18:46:52.946365] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:15.512 [2024-07-24 18:46:52.946433] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:18.038 18:46:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:18.038 18:46:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:18.038 spdk_app_start Round 1 00:04:18.038 18:46:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3030734 /var/tmp/spdk-nbd.sock 00:04:18.038 18:46:55 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3030734 ']' 00:04:18.038 18:46:55 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:18.038 18:46:55 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:18.038 18:46:55 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:18.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
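
The waitfornbd probes traced here (autotest_common.sh@868-889) poll /proc/partitions for the device name, then read a single 4k direct-I/O block to confirm the device actually serves data. A sketch reconstructed from that xtrace; the sleep interval and scratch path are assumptions, since the trace only shows the loop bounds and the grep/dd probes:

    waitfornbd_sketch() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # The kernel lists the device once nbd_start_disk has attached it.
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # Prove the device answers reads: one 4096-byte direct-I/O block.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct &&
            rm -f /tmp/nbdtest
    }
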
00:04:18.038 18:46:55 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:18.038 18:46:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:18.296 18:46:55 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:18.296 18:46:55 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:18.296 18:46:55 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:18.553 Malloc0 00:04:18.553 18:46:55 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:18.812 Malloc1 00:04:18.812 18:46:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:18.812 18:46:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.812 18:46:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:18.812 18:46:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:18.812 18:46:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.812 18:46:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:18.812 18:46:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:18.812 18:46:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.812 18:46:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:18.812 18:46:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:18.812 18:46:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.812 18:46:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:18.812 18:46:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:18.812 18:46:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:18.812 18:46:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:18.812 18:46:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:19.070 /dev/nbd0 00:04:19.070 18:46:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:19.070 18:46:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:19.070 18:46:56 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:19.070 18:46:56 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:19.070 18:46:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:19.070 18:46:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:19.070 18:46:56 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:19.070 18:46:56 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:19.070 18:46:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:19.070 18:46:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:19.070 18:46:56 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:19.070 1+0 records in 00:04:19.070 1+0 records out 00:04:19.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198856 s, 20.6 MB/s 00:04:19.070 18:46:56 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:19.070 18:46:56 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:19.070 18:46:56 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:19.070 18:46:56 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:19.070 18:46:56 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:19.070 18:46:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:19.070 18:46:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:19.070 18:46:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:19.327 /dev/nbd1 00:04:19.327 18:46:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:19.327 18:46:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:19.328 18:46:56 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:19.328 18:46:56 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:19.328 18:46:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:19.328 18:46:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:19.328 18:46:56 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:19.328 18:46:56 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:19.328 18:46:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:19.328 18:46:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:19.328 18:46:56 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:19.328 1+0 records in 00:04:19.328 1+0 records out 00:04:19.328 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197186 s, 20.8 MB/s 00:04:19.328 18:46:56 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:19.328 18:46:56 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:19.328 18:46:56 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:19.328 18:46:56 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:19.328 18:46:56 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:19.328 18:46:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:19.328 18:46:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:19.328 18:46:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:19.328 18:46:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.328 18:46:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:19.586 { 00:04:19.586 "nbd_device": "/dev/nbd0", 00:04:19.586 "bdev_name": "Malloc0" 00:04:19.586 }, 00:04:19.586 { 00:04:19.586 "nbd_device": "/dev/nbd1", 00:04:19.586 "bdev_name": "Malloc1" 00:04:19.586 } 00:04:19.586 ]' 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:19.586 { 00:04:19.586 "nbd_device": "/dev/nbd0", 00:04:19.586 "bdev_name": "Malloc0" 00:04:19.586 }, 00:04:19.586 { 00:04:19.586 "nbd_device": "/dev/nbd1", 00:04:19.586 "bdev_name": "Malloc1" 00:04:19.586 } 00:04:19.586 ]' 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:19.586 /dev/nbd1' 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:19.586 /dev/nbd1' 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:19.586 256+0 records in 00:04:19.586 256+0 records out 00:04:19.586 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501846 s, 209 MB/s 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:19.586 256+0 records in 00:04:19.586 256+0 records out 00:04:19.586 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020921 s, 50.1 MB/s 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:19.586 256+0 records in 00:04:19.586 256+0 records out 00:04:19.586 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247198 s, 42.4 MB/s 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:19.586 18:46:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:19.844 18:46:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:19.844 18:46:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:19.844 18:46:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:19.844 18:46:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:19.844 18:46:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:19.844 18:46:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:19.844 18:46:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:19.844 18:46:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:19.844 18:46:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:19.844 18:46:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:20.413 18:46:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:20.413 18:46:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:20.413 18:46:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:20.413 18:46:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:20.413 18:46:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:20.413 18:46:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:20.413 18:46:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:20.413 18:46:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:20.413 18:46:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:20.414 18:46:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.414 18:46:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:20.414 18:46:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:20.414 18:46:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:20.414 18:46:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:20.414 18:46:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:20.414 18:46:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:20.414 18:46:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:20.414 18:46:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:20.414 18:46:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:20.414 18:46:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:20.414 18:46:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:20.414 18:46:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:20.414 18:46:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:20.414 18:46:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:20.980 18:46:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:20.980 [2024-07-24 18:46:58.554794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:21.238 [2024-07-24 18:46:58.673202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:21.238 [2024-07-24 18:46:58.673207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.238 [2024-07-24 18:46:58.738524] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:21.238 [2024-07-24 18:46:58.738607] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:23.764 18:47:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:23.764 18:47:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:23.764 spdk_app_start Round 2 00:04:23.764 18:47:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3030734 /var/tmp/spdk-nbd.sock 00:04:23.764 18:47:01 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3030734 ']' 00:04:23.764 18:47:01 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:23.764 18:47:01 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:23.764 18:47:01 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:23.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
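
Each round's data pass in nbd_dd_data_verify is visible in the trace: fill a scratch file from /dev/urandom, dd it onto both nbd devices with direct I/O, then cmp the first 1M back from each device. The same sequence as a stand-alone sketch (paths are placeholders for the jenkins workspace ones):

    tmp_file=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp_file" "$dev"   # any mismatch exits nonzero and fails the test
    done
    rm "$tmp_file"
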
00:04:23.764 18:47:01 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:23.764 18:47:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:24.022 18:47:01 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:24.022 18:47:01 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:24.022 18:47:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:24.280 Malloc0 00:04:24.280 18:47:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:24.538 Malloc1 00:04:24.538 18:47:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:24.538 18:47:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.538 18:47:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:24.538 18:47:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:24.538 18:47:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.538 18:47:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:24.538 18:47:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:24.538 18:47:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.538 18:47:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:24.538 18:47:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:24.538 18:47:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.538 18:47:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:24.538 18:47:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:24.538 18:47:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:24.538 18:47:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:24.538 18:47:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:24.797 /dev/nbd0 00:04:24.797 18:47:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:24.797 18:47:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:24.797 18:47:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:24.797 18:47:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:24.797 18:47:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:24.797 18:47:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:24.797 18:47:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:24.797 18:47:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:24.797 18:47:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:24.797 18:47:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:24.797 18:47:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:24.797 1+0 records in 00:04:24.797 1+0 records out 00:04:24.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187029 s, 21.9 MB/s 00:04:24.797 18:47:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:24.797 18:47:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:24.797 18:47:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:24.797 18:47:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:24.797 18:47:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:24.797 18:47:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:24.797 18:47:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:24.797 18:47:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:25.053 /dev/nbd1 00:04:25.053 18:47:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:25.053 18:47:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:25.053 18:47:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:25.053 18:47:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:25.053 18:47:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:25.053 18:47:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:25.053 18:47:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:25.053 18:47:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:25.053 18:47:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:25.053 18:47:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:25.053 18:47:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:25.053 1+0 records in 00:04:25.053 1+0 records out 00:04:25.053 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207658 s, 19.7 MB/s 00:04:25.053 18:47:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:25.053 18:47:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:25.053 18:47:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:25.053 18:47:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:25.053 18:47:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:25.053 18:47:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:25.053 18:47:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.053 18:47:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:25.053 18:47:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.053 18:47:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:25.310 18:47:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:25.310 { 00:04:25.310 "nbd_device": "/dev/nbd0", 00:04:25.310 "bdev_name": "Malloc0" 00:04:25.310 }, 00:04:25.310 { 00:04:25.310 "nbd_device": "/dev/nbd1", 00:04:25.310 "bdev_name": "Malloc1" 00:04:25.310 } 00:04:25.310 ]' 00:04:25.310 18:47:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:25.310 { 00:04:25.310 "nbd_device": "/dev/nbd0", 00:04:25.310 "bdev_name": "Malloc0" 00:04:25.310 }, 00:04:25.310 { 00:04:25.310 "nbd_device": "/dev/nbd1", 00:04:25.310 "bdev_name": "Malloc1" 00:04:25.310 } 00:04:25.310 ]' 00:04:25.310 18:47:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:25.310 18:47:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:25.310 /dev/nbd1' 00:04:25.310 18:47:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:25.310 /dev/nbd1' 00:04:25.310 18:47:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:25.310 18:47:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:25.310 18:47:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:25.310 18:47:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:25.310 18:47:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:25.310 18:47:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:25.310 18:47:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.310 18:47:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:25.310 18:47:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:25.310 18:47:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:25.310 18:47:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:25.310 18:47:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:25.310 256+0 records in 00:04:25.310 256+0 records out 00:04:25.310 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00376176 s, 279 MB/s 00:04:25.310 18:47:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:25.310 18:47:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:25.568 256+0 records in 00:04:25.568 256+0 records out 00:04:25.568 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245993 s, 42.6 MB/s 00:04:25.568 18:47:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:25.568 18:47:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:25.568 256+0 records in 00:04:25.568 256+0 records out 00:04:25.568 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260022 s, 40.3 MB/s 00:04:25.568 18:47:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:25.568 18:47:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.568 18:47:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:25.568 18:47:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:25.568 18:47:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:25.568 18:47:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:25.568 18:47:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:25.568 18:47:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:25.568 18:47:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:25.568 18:47:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:25.568 18:47:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:25.568 18:47:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:25.568 18:47:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:25.568 18:47:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.568 18:47:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.568 18:47:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:25.568 18:47:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:25.568 18:47:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:25.568 18:47:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:25.825 18:47:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:25.826 18:47:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:25.826 18:47:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:25.826 18:47:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:25.826 18:47:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:25.826 18:47:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:25.826 18:47:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:25.826 18:47:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:25.826 18:47:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:25.826 18:47:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:26.082 18:47:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:26.082 18:47:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:26.082 18:47:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:26.082 18:47:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:26.082 18:47:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:26.082 18:47:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:26.082 18:47:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:26.082 18:47:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:26.082 18:47:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:26.082 18:47:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.082 18:47:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:26.340 18:47:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:26.340 18:47:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:26.340 18:47:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:26.340 18:47:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:26.340 18:47:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:26.340 18:47:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:26.340 18:47:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:26.340 18:47:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:26.340 18:47:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:26.340 18:47:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:26.340 18:47:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:26.340 18:47:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:26.340 18:47:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:26.598 18:47:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:26.856 [2024-07-24 18:47:04.308972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:26.856 [2024-07-24 18:47:04.425448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.856 [2024-07-24 18:47:04.425449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:27.113 [2024-07-24 18:47:04.490968] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:27.114 [2024-07-24 18:47:04.491040] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:29.635 18:47:07 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3030734 /var/tmp/spdk-nbd.sock 00:04:29.635 18:47:07 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3030734 ']' 00:04:29.635 18:47:07 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:29.635 18:47:07 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:29.636 18:47:07 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:29.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
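
killprocess shows up throughout this log (autotest_common.sh@950-974): validate the pid argument, check liveness with kill -0, refuse to kill a sudo wrapper, then kill and wait. A reconstruction inferred from the probes in the xtrace, not the actual helper source:

    killprocess_sketch() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0       # already gone
        if [ "$(uname)" = Linux ]; then
            # Never kill the sudo wrapper itself (autotest_common.sh@956-960).
            [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # only reaps children of this shell, as in the harness
    }
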
00:04:29.636 18:47:07 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:29.636 18:47:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:29.892 18:47:07 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:29.892 18:47:07 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:29.892 18:47:07 event.app_repeat -- event/event.sh@39 -- # killprocess 3030734 00:04:29.892 18:47:07 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 3030734 ']' 00:04:29.892 18:47:07 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 3030734 00:04:29.892 18:47:07 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:04:29.892 18:47:07 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:29.892 18:47:07 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3030734 00:04:29.892 18:47:07 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:29.892 18:47:07 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:29.892 18:47:07 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3030734' 00:04:29.892 killing process with pid 3030734 00:04:29.892 18:47:07 event.app_repeat -- common/autotest_common.sh@969 -- # kill 3030734 00:04:29.892 18:47:07 event.app_repeat -- common/autotest_common.sh@974 -- # wait 3030734 00:04:30.149 spdk_app_start is called in Round 0. 00:04:30.149 Shutdown signal received, stop current app iteration 00:04:30.149 Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 reinitialization... 00:04:30.149 spdk_app_start is called in Round 1. 00:04:30.149 Shutdown signal received, stop current app iteration 00:04:30.149 Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 reinitialization... 00:04:30.149 spdk_app_start is called in Round 2. 00:04:30.149 Shutdown signal received, stop current app iteration 00:04:30.149 Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 reinitialization... 00:04:30.149 spdk_app_start is called in Round 3. 
00:04:30.149 Shutdown signal received, stop current app iteration 00:04:30.149 18:47:07 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:30.149 18:47:07 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:30.149 00:04:30.149 real 0m17.946s 00:04:30.149 user 0m38.703s 00:04:30.149 sys 0m3.243s 00:04:30.149 18:47:07 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:30.149 18:47:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:30.149 ************************************ 00:04:30.149 END TEST app_repeat 00:04:30.149 ************************************ 00:04:30.149 18:47:07 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:30.149 18:47:07 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:30.149 18:47:07 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.149 18:47:07 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.149 18:47:07 event -- common/autotest_common.sh@10 -- # set +x 00:04:30.149 ************************************ 00:04:30.149 START TEST cpu_locks 00:04:30.149 ************************************ 00:04:30.149 18:47:07 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:30.149 * Looking for test storage... 00:04:30.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:30.149 18:47:07 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:30.149 18:47:07 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:30.149 18:47:07 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:30.149 18:47:07 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:30.149 18:47:07 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.149 18:47:07 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.149 18:47:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:30.149 ************************************ 00:04:30.149 START TEST default_locks 00:04:30.149 ************************************ 00:04:30.149 18:47:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:04:30.149 18:47:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3033085 00:04:30.149 18:47:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:30.149 18:47:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3033085 00:04:30.149 18:47:07 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3033085 ']' 00:04:30.149 18:47:07 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.149 18:47:07 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:30.149 18:47:07 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
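
The waitforlisten call traced here blocks until spdk_tgt answers on its UNIX domain socket, retrying up to max_retries=100 times. A hedged sketch of that polling loop; using rpc_get_methods as the liveness probe is an assumption, since the trace only shows the echo and the retry counter:

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        local max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died while we waited
            ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 && return 0
            sleep 0.1
        done
        return 1
    }
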
00:04:30.149 18:47:07 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:30.149 18:47:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:04:30.149 [2024-07-24 18:47:07.739133] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization...
[2024-07-24 18:47:07.739214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3033085 ]
00:04:30.406 EAL: No free 2048 kB hugepages reported on node 1
00:04:30.406 [2024-07-24 18:47:07.795497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:30.406 [2024-07-24 18:47:07.904414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:30.663 18:47:08 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:30.663 18:47:08 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0
00:04:30.663 18:47:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3033085
00:04:30.663 18:47:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3033085
00:04:30.663 18:47:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:31.228 lslocks: write error
00:04:31.228 18:47:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3033085
00:04:31.228 18:47:08 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 3033085 ']'
00:04:31.228 18:47:08 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 3033085
00:04:31.228 18:47:08 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname
00:04:31.228 18:47:08 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:31.228 18:47:08 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3033085
00:04:31.228 18:47:08 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:31.228 18:47:08 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:31.228 18:47:08 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3033085'
killing process with pid 3033085
18:47:08 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 3033085
18:47:08 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 3033085
00:04:31.488 18:47:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3033085
00:04:31.488 18:47:09 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0
00:04:31.488 18:47:09 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3033085
00:04:31.488 18:47:09 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:04:31.488 18:47:09 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:31.488 18:47:09 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:04:31.488 18:47:09 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:31.488 18:47:09 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 3033085
00:04:31.488 18:47:09 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3033085 ']'
00:04:31.488 18:47:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:31.488 18:47:09 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:31.488 18:47:09 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:31.488 18:47:09 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:31.488 18:47:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:04:31.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3033085) - No such process
00:04:31.488 ERROR: process (pid: 3033085) is no longer running
00:04:31.488 18:47:09 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:31.488 18:47:09 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1
00:04:31.488 18:47:09 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1
00:04:31.488 18:47:09 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:04:31.488 18:47:09 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:04:31.488 18:47:09 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:04:31.488 18:47:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:04:31.488 18:47:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:04:31.488 18:47:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:04:31.488 18:47:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:04:31.488
00:04:31.488 real	0m1.380s
00:04:31.488 user	0m1.288s
00:04:31.488 sys	0m0.575s
00:04:31.488 18:47:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:31.488 18:47:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:04:31.488 ************************************
00:04:31.488 END TEST default_locks
00:04:31.488 ************************************
00:04:31.746 18:47:09 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:04:31.746 18:47:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:31.746 18:47:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:31.746 18:47:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:31.746 ************************************
00:04:31.746 START TEST default_locks_via_rpc
00:04:31.746 ************************************
00:04:31.746 18:47:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc
00:04:31.746 18:47:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3033247
00:04:31.746 18:47:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:31.746 18:47:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3033247
00:04:31.746 18:47:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3033247 ']'
00:04:31.746 18:47:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:31.746 18:47:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:31.746 18:47:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:31.746 18:47:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:31.746 18:47:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:31.746 [2024-07-24 18:47:09.171380] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization...
[2024-07-24 18:47:09.171482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3033247 ]
00:04:31.746 EAL: No free 2048 kB hugepages reported on node 1
00:04:31.746 [2024-07-24 18:47:09.232911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:31.747 [2024-07-24 18:47:09.346750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:32.680 18:47:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:32.680 18:47:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:04:32.680 18:47:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:04:32.680 18:47:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:32.680 18:47:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:32.680 18:47:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:32.680 18:47:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:04:32.680 18:47:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:04:32.680 18:47:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:04:32.680 18:47:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:04:32.680 18:47:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:04:32.680 18:47:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:32.680 18:47:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:32.680 18:47:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:32.680 18:47:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3033247
00:04:32.680 18:47:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3033247
00:04:32.680 18:47:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:32.937 18:47:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3033247
00:04:32.937 18:47:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 3033247 ']'
00:04:32.937 18:47:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 3033247
00:04:32.937 18:47:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname
00:04:32.937 18:47:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:32.937 18:47:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3033247
00:04:32.938 18:47:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:32.938 18:47:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:32.938 18:47:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3033247'
killing process with pid 3033247
18:47:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 3033247
18:47:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 3033247
00:04:33.503
00:04:33.503 real	0m1.812s
00:04:33.503 user	0m1.929s
00:04:33.503 sys	0m0.575s
00:04:33.503 18:47:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:33.503 18:47:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:33.503 ************************************
00:04:33.503 END TEST default_locks_via_rpc
00:04:33.503 ************************************
00:04:33.503 18:47:10 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:04:33.503 18:47:10 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:33.503 18:47:10 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:33.503 18:47:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:33.503 ************************************
00:04:33.503 START TEST non_locking_app_on_locked_coremask
00:04:33.503 ************************************
00:04:33.503 18:47:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask
00:04:33.503 18:47:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3033542
00:04:33.503 18:47:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:33.503 18:47:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3033542 /var/tmp/spdk.sock
00:04:33.503 18:47:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3033542 ']'
00:04:33.503 18:47:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:33.503 18:47:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:33.503 18:47:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:33.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:33.503 18:47:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:33.503 18:47:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:33.503 [2024-07-24 18:47:11.033307] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization...
[2024-07-24 18:47:11.033389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3033542 ]
00:04:33.503 EAL: No free 2048 kB hugepages reported on node 1
00:04:33.503 [2024-07-24 18:47:11.094248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:33.761 [2024-07-24 18:47:11.210899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:34.695 18:47:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:34.695 18:47:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:04:34.695 18:47:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3033643
00:04:34.695 18:47:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:04:34.695 18:47:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3033643 /var/tmp/spdk2.sock
00:04:34.695 18:47:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3033643 ']'
00:04:34.695 18:47:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:34.695 18:47:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:34.695 18:47:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:34.695 18:47:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:34.695 18:47:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:34.695 [2024-07-24 18:47:12.051143] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization...
[2024-07-24 18:47:12.051232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3033643 ]
00:04:34.695 EAL: No free 2048 kB hugepages reported on node 1
00:04:34.695 [2024-07-24 18:47:12.148020] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
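locks_exist, traced at event/cpu_locks.sh@22 above, is just an lslocks-plus-grep check that the target's flock on a /var/tmp/spdk_cpu_lock_* file is visible for that pid; the stray "lslocks: write error" lines are lslocks complaining when grep -q matches, exits, and closes the pipe early, and are harmless. A sketch of the check, assuming util-linux lslocks is available:

    locks_exist() {
        local pid=$1
        # grep -q exits on the first match, so lslocks may report a benign
        # "write error" when its stdout pipe closes before it finishes
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    locks_exist 3033542 && echo "pid 3033542 holds a CPU core lock"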
00:04:34.695 [2024-07-24 18:47:12.148065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:34.953 [2024-07-24 18:47:12.387340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:35.519 18:47:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:35.519 18:47:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:04:35.519 18:47:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3033542
00:04:35.519 18:47:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3033542
00:04:35.519 18:47:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:36.083 lslocks: write error
00:04:36.083 18:47:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3033542
00:04:36.083 18:47:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3033542 ']'
00:04:36.083 18:47:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3033542
00:04:36.083 18:47:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:04:36.083 18:47:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:36.083 18:47:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3033542
00:04:36.083 18:47:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:36.083 18:47:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:36.083 18:47:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3033542'
killing process with pid 3033542
18:47:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3033542
18:47:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3033542
00:04:37.015 18:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3033643
00:04:37.015 18:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3033643 ']'
00:04:37.015 18:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3033643
00:04:37.015 18:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:04:37.015 18:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:37.015 18:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3033643
00:04:37.015 18:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:37.015 18:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:37.015 18:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3033643'
killing process with pid 3033643
18:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3033643
18:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3033643
00:04:37.581
00:04:37.581 real	0m3.916s
00:04:37.581 user	0m4.252s
00:04:37.581 sys	0m1.100s
00:04:37.581 18:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:37.581 18:47:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:37.581 ************************************
00:04:37.581 END TEST non_locking_app_on_locked_coremask
00:04:37.581 ************************************
00:04:37.581 18:47:14 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:04:37.581 18:47:14 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:37.581 18:47:14 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:37.581 18:47:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:37.581 ************************************
00:04:37.581 START TEST locking_app_on_unlocked_coremask
00:04:37.581 ************************************
00:04:37.581 18:47:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask
00:04:37.581 18:47:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3033985
00:04:37.581 18:47:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:04:37.581 18:47:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3033985 /var/tmp/spdk.sock
00:04:37.581 18:47:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3033985 ']'
00:04:37.581 18:47:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:37.581 18:47:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:37.581 18:47:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:37.581 18:47:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:37.581 18:47:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:37.581 [2024-07-24 18:47:14.999649] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization...
[2024-07-24 18:47:14.999739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3033985 ]
00:04:37.581 EAL: No free 2048 kB hugepages reported on node 1
00:04:37.581 [2024-07-24 18:47:15.060128] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
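This test inverts the previous one: here the first target is the instance launched with --disable-cpumask-locks (the "CPU core locks deactivated" notice above), so a second target on the same mask can still claim core 0. The shape of the scenario, with paths as in the log and the backgrounding shown schematically rather than as the test's literal code:

    SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    $SPDK_TGT -m 0x1 --disable-cpumask-locks &   # first target takes no lock on core 0
    pid1=$!
    $SPDK_TGT -m 0x1 -r /var/tmp/spdk2.sock &    # second target can now lock core 0 unopposed
    pid2=$!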
00:04:37.581 [2024-07-24 18:47:15.060181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:37.695 [2024-07-24 18:47:15.182715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:38.144 18:47:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:38.144 18:47:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:04:38.144 18:47:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3034112
00:04:38.144 18:47:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:04:38.144 18:47:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3034112 /var/tmp/spdk2.sock
00:04:38.144 18:47:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3034112 ']'
00:04:38.144 18:47:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:38.144 18:47:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:38.144 18:47:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:38.144 18:47:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:38.144 18:47:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:38.144 [2024-07-24 18:47:15.498269] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization...
00:04:38.144 [2024-07-24 18:47:15.498354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3034112 ]
00:04:38.144 EAL: No free 2048 kB hugepages reported on node 1
00:04:38.144 [2024-07-24 18:47:15.588002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:38.377 [2024-07-24 18:47:15.832340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:38.990 18:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:38.990 18:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:04:38.990 18:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3034112
00:04:38.990 18:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3034112
00:04:38.990 18:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:39.248 lslocks: write error
00:04:39.248 18:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3033985
00:04:39.248 18:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3033985 ']'
00:04:39.248 18:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3033985
00:04:39.248 18:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:04:39.248 18:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:39.248 18:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3033985
00:04:39.248 18:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:39.248 18:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:39.248 18:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3033985'
killing process with pid 3033985
18:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 3033985
18:47:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3033985
00:04:40.181 18:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3034112
00:04:40.181 18:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3034112 ']'
00:04:40.181 18:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3034112
00:04:40.181 18:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:04:40.181 18:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:40.181 18:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3034112
00:04:40.438 18:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:40.438 18:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:40.438 18:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3034112'
killing process with pid 3034112
18:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 3034112
18:47:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3034112
00:04:40.695
00:04:40.695 real	0m3.335s
00:04:40.695 user	0m3.506s
00:04:40.695 sys	0m1.013s
00:04:40.695 18:47:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:40.695 18:47:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:40.695 ************************************
00:04:40.695 END TEST locking_app_on_unlocked_coremask
00:04:40.695 ************************************
00:04:40.964 18:47:18 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:04:40.964 18:47:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:40.964 18:47:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:40.964 18:47:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:40.964 ************************************
00:04:40.964 START TEST locking_app_on_locked_coremask
00:04:40.964 ************************************
00:04:40.964 18:47:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask
00:04:40.964 18:47:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3034421
00:04:40.964 18:47:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:40.964 18:47:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3034421 /var/tmp/spdk.sock
00:04:40.964 18:47:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3034421 ']'
00:04:40.964 18:47:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:40.964 18:47:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:40.964 18:47:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:40.964 18:47:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:40.964 18:47:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:40.964 [2024-07-24 18:47:18.385919] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization...
00:04:40.964 [2024-07-24 18:47:18.386014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3034421 ]
00:04:40.964 EAL: No free 2048 kB hugepages reported on node 1
00:04:40.964 [2024-07-24 18:47:18.446614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:40.964 [2024-07-24 18:47:18.561602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:41.899 18:47:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:41.899 18:47:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:04:41.899 18:47:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3034559
00:04:41.899 18:47:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:04:41.899 18:47:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3034559 /var/tmp/spdk2.sock
00:04:41.899 18:47:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0
00:04:41.899 18:47:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3034559 /var/tmp/spdk2.sock
00:04:41.899 18:47:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:04:41.899 18:47:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:41.899 18:47:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:04:41.899 18:47:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:41.899 18:47:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3034559 /var/tmp/spdk2.sock
00:04:41.899 18:47:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3034559 ']'
00:04:41.899 18:47:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:41.899 18:47:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:41.899 18:47:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:41.899 18:47:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:41.899 18:47:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:41.899 [2024-07-24 18:47:19.362265] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization...
00:04:41.899 [2024-07-24 18:47:19.362350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3034559 ]
00:04:41.899 EAL: No free 2048 kB hugepages reported on node 1
00:04:41.899 [2024-07-24 18:47:19.458989] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3034421 has claimed it.
[2024-07-24 18:47:19.459048] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:04:42.465 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3034559) - No such process
00:04:42.465 ERROR: process (pid: 3034559) is no longer running
00:04:42.465 18:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:42.465 18:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1
00:04:42.465 18:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1
00:04:42.465 18:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:04:42.465 18:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:04:42.465 18:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:04:42.465 18:47:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3034421
00:04:42.465 18:47:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3034421
00:04:42.465 18:47:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:43.029 lslocks: write error
00:04:43.029 18:47:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3034421
00:04:43.029 18:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3034421 ']'
00:04:43.029 18:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3034421
00:04:43.029 18:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:04:43.029 18:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:43.029 18:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3034421
00:04:43.029 18:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:43.029 18:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:43.029 18:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3034421'
killing process with pid 3034421
18:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3034421
18:47:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3034421
00:04:43.592
00:04:43.592 real	0m2.699s
00:04:43.592 user	0m3.021s
00:04:43.592 sys	0m0.703s
18:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:43.592 18:47:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:43.592 ************************************
00:04:43.592 END TEST locking_app_on_locked_coremask
00:04:43.592 ************************************
00:04:43.592 18:47:21 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:04:43.592 18:47:21 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:43.592 18:47:21 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:43.592 18:47:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:43.592 ************************************
00:04:43.592 START TEST locking_overlapped_coremask
00:04:43.592 ************************************
00:04:43.592 18:47:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask
00:04:43.592 18:47:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3034850
00:04:43.592 18:47:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:04:43.592 18:47:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3034850 /var/tmp/spdk.sock
00:04:43.592 18:47:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3034850 ']'
00:04:43.592 18:47:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:43.592 18:47:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:43.592 18:47:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:43.592 18:47:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:43.592 18:47:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:43.592 [2024-07-24 18:47:21.123659] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization...
00:04:43.592 [2024-07-24 18:47:21.123739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3034850 ]
00:04:43.592 EAL: No free 2048 kB hugepages reported on node 1
00:04:43.592 [2024-07-24 18:47:21.184245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:04:43.849 [2024-07-24 18:47:21.302862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:04:43.849 [2024-07-24 18:47:21.302929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:04:43.849 [2024-07-24 18:47:21.302931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:44.787 18:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:44.787 18:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0
00:04:44.787 18:47:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3034879
00:04:44.787 18:47:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3034879 /var/tmp/spdk2.sock
00:04:44.787 18:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0
00:04:44.787 18:47:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:04:44.787 18:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3034879 /var/tmp/spdk2.sock
00:04:44.787 18:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:04:44.787 18:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:44.787 18:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:04:44.787 18:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:44.787 18:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3034879 /var/tmp/spdk2.sock
00:04:44.787 18:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3034879 ']'
00:04:44.787 18:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:44.787 18:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:44.787 18:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:44.787 18:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:44.787 18:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:44.787 [2024-07-24 18:47:22.101770] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization...
00:04:44.787 [2024-07-24 18:47:22.101884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3034879 ]
00:04:44.787 EAL: No free 2048 kB hugepages reported on node 1
00:04:44.787 [2024-07-24 18:47:22.196279] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3034850 has claimed it.
[2024-07-24 18:47:22.196344] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:04:45.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3034879) - No such process
00:04:45.353 ERROR: process (pid: 3034879) is no longer running
00:04:45.353 18:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:45.353 18:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1
00:04:45.353 18:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1
00:04:45.353 18:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:04:45.353 18:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:04:45.353 18:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:04:45.353 18:47:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:04:45.353 18:47:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:04:45.353 18:47:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:04:45.353 18:47:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:04:45.353 18:47:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3034850
00:04:45.353 18:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 3034850 ']'
00:04:45.353 18:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 3034850
00:04:45.353 18:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname
00:04:45.353 18:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:45.353 18:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3034850
00:04:45.353 18:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:45.353 18:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:45.353 18:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3034850'
killing process with pid 3034850
18:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 3034850
18:47:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 3034850
00:04:45.925
00:04:45.925 real	0m2.223s
00:04:45.925 user	0m6.213s
00:04:45.925 sys	0m0.528s
00:04:45.925 18:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:45.925 18:47:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:45.925 ************************************
00:04:45.925 END TEST locking_overlapped_coremask
00:04:45.925 ************************************
00:04:45.925 18:47:23 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:04:45.925 18:47:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:45.925 18:47:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:45.925 18:47:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:45.925 ************************************
00:04:45.925 START TEST locking_overlapped_coremask_via_rpc
00:04:45.925 ************************************
00:04:45.925 18:47:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc
00:04:45.925 18:47:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3035153
00:04:45.925 18:47:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:04:45.925 18:47:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3035153 /var/tmp/spdk.sock
00:04:45.925 18:47:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3035153 ']'
00:04:45.925 18:47:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:45.925 18:47:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:45.925 18:47:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:45.925 18:47:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:45.925 18:47:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:45.925 [2024-07-24 18:47:23.391690] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization...
[2024-07-24 18:47:23.391770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3035153 ]
00:04:45.925 EAL: No free 2048 kB hugepages reported on node 1
00:04:45.925 [2024-07-24 18:47:23.452230] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
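The overlap that doomed pid 3034879 above is visible in the masks alone: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so both targets claim core 2, exactly the core named in the claim_cpu_cores error. The same intersection, checked with shell arithmetic:

    printf 'shared cores: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. bit 2 / core 2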
00:04:45.925 [2024-07-24 18:47:23.452264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:04:46.184 [2024-07-24 18:47:23.572045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:04:46.184 [2024-07-24 18:47:23.572093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:04:46.184 [2024-07-24 18:47:23.572096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:46.759 18:47:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:46.759 18:47:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:04:46.759 18:47:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3035204
00:04:46.759 18:47:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3035204 /var/tmp/spdk2.sock
00:04:46.759 18:47:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3035204 ']'
00:04:46.759 18:47:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:04:46.759 18:47:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:46.759 18:47:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:46.759 18:47:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:46.759 18:47:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:46.759 18:47:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:47.018 [2024-07-24 18:47:24.373955] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization...
00:04:47.018 [2024-07-24 18:47:24.374046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3035204 ]
00:04:47.018 EAL: No free 2048 kB hugepages reported on node 1
00:04:47.018 [2024-07-24 18:47:24.464950] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:04:47.018 [2024-07-24 18:47:24.464982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:04:47.275 [2024-07-24 18:47:24.694814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:04:47.275 [2024-07-24 18:47:24.694878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:04:47.275 [2024-07-24 18:47:24.694880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:04:47.842 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:47.842 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:04:47.842 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:04:47.842 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:47.842 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:47.842 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:47.842 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:04:47.842 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0
00:04:47.842 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:04:47.842 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:04:47.842 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:47.842 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:04:47.842 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:47.842 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:04:47.842 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:47.842 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:47.842 [2024-07-24 18:47:25.326208] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3035153 has claimed it.
00:04:47.842 request:
00:04:47.842 {
00:04:47.842 "method": "framework_enable_cpumask_locks",
00:04:47.842 "req_id": 1
00:04:47.842 }
00:04:47.842 Got JSON-RPC error response
00:04:47.842 response:
00:04:47.842 {
00:04:47.842 "code": -32603,
00:04:47.842 "message": "Failed to claim CPU core: 2"
00:04:47.842 }
00:04:47.842 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:04:47.842 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1
00:04:47.842 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:04:47.842 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:04:47.842 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:04:47.842 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3035153 /var/tmp/spdk.sock
00:04:47.842 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3035153 ']'
00:04:47.842 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:47.842 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:47.842 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:47.842 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:47.842 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:48.100 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:48.100 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:04:48.100 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3035204 /var/tmp/spdk2.sock
00:04:48.100 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3035204 ']'
00:04:48.100 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:48.100 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:48.100 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:48.100 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:48.100 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:48.359 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:48.359 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:04:48.359 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:04:48.359 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:04:48.359 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:04:48.359 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:04:48.359
00:04:48.359 real 0m2.506s
00:04:48.359 user 0m1.222s
00:04:48.359 sys 0m0.208s
00:04:48.359 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:48.359 18:47:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:48.359 ************************************
00:04:48.359 END TEST locking_overlapped_coremask_via_rpc
00:04:48.359 ************************************
00:04:48.359 18:47:25 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:04:48.359 18:47:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3035153 ]]
00:04:48.359 18:47:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3035153
00:04:48.359 18:47:25 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3035153 ']'
00:04:48.359 18:47:25 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3035153
00:04:48.359 18:47:25 event.cpu_locks -- common/autotest_common.sh@955 -- # uname
00:04:48.359 18:47:25 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:48.359 18:47:25 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3035153
00:04:48.359 18:47:25 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:48.359 18:47:25 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:48.359 18:47:25 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3035153'
00:04:48.359 killing process with pid 3035153
00:04:48.359 18:47:25 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3035153
00:04:48.359 18:47:25 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3035153
00:04:48.925 18:47:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3035204 ]]
00:04:48.925 18:47:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3035204
00:04:48.925 18:47:26 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3035204 ']'
00:04:48.925 18:47:26 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3035204
00:04:48.925 18:47:26 event.cpu_locks -- common/autotest_common.sh@955 -- # uname
00:04:48.925 18:47:26 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:48.925 18:47:26 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3035204
00:04:48.925 18:47:26 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:04:48.925 18:47:26 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:04:48.925 18:47:26 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3035204'
00:04:48.925 killing process with pid 3035204
00:04:48.925 18:47:26 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3035204
00:04:48.925 18:47:26 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3035204
00:04:49.491 18:47:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:04:49.491 18:47:26 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:04:49.491 18:47:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3035153 ]]
00:04:49.491 18:47:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3035153
00:04:49.491 18:47:26 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3035153 ']'
00:04:49.491 18:47:26 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3035153
00:04:49.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3035153) - No such process
00:04:49.491 18:47:26 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3035153 is not found'
00:04:49.491 Process with pid 3035153 is not found
00:04:49.491 18:47:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3035204 ]]
00:04:49.491 18:47:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3035204
00:04:49.491 18:47:26 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3035204 ']'
00:04:49.491 18:47:26 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3035204
00:04:49.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3035204) - No such process
00:04:49.492 18:47:26 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3035204 is not found'
00:04:49.492 Process with pid 3035204 is not found
00:04:49.492 18:47:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:04:49.492
00:04:49.492 real 0m19.239s
00:04:49.492 user 0m34.031s
00:04:49.492 sys 0m5.601s
00:04:49.492 18:47:26 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:49.492 18:47:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:49.492 ************************************
00:04:49.492 END TEST cpu_locks
00:04:49.492 ************************************
00:04:49.492
00:04:49.492 real 0m43.242s
00:04:49.492 user 1m21.791s
00:04:49.492 sys 0m9.646s
00:04:49.492 18:47:26 event -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:49.492 18:47:26 event -- common/autotest_common.sh@10 -- # set +x
00:04:49.492 ************************************
00:04:49.492 END TEST event
00:04:49.492 ************************************
00:04:49.492 18:47:26 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh
00:04:49.492 18:47:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:49.492 18:47:26 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:49.492 18:47:26 -- common/autotest_common.sh@10 -- # set +x
00:04:49.492 ************************************
00:04:49.492 START TEST thread
00:04:49.492 ************************************
00:04:49.492 18:47:26 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh
00:04:49.492 * Looking for test storage...
00:04:49.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread
00:04:49.492 18:47:26 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:04:49.492 18:47:26 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']'
00:04:49.492 18:47:26 thread -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:49.492 18:47:26 thread -- common/autotest_common.sh@10 -- # set +x
00:04:49.492 ************************************
00:04:49.492 START TEST thread_poller_perf
00:04:49.492 ************************************
00:04:49.492 18:47:26 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:04:49.492 [2024-07-24 18:47:27.008239] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization...
00:04:49.492 [2024-07-24 18:47:27.008309] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3035662 ]
00:04:49.492 EAL: No free 2048 kB hugepages reported on node 1
00:04:49.492 [2024-07-24 18:47:27.072209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:49.750 [2024-07-24 18:47:27.194794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:49.750 Running 1000 pollers for 1 seconds with 1 microseconds period.
00:04:51.123 ======================================
00:04:51.123 busy:2708549776 (cyc)
00:04:51.123 total_run_count: 296000
00:04:51.123 tsc_hz: 2700000000 (cyc)
00:04:51.123 ======================================
00:04:51.123 poller_cost: 9150 (cyc), 3388 (nsec)
00:04:51.123
00:04:51.123 real 0m1.331s
00:04:51.123 user 0m1.242s
00:04:51.123 sys 0m0.084s
00:04:51.123 18:47:28 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:51.123 18:47:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:04:51.123 ************************************
00:04:51.123 END TEST thread_poller_perf
00:04:51.123 ************************************
00:04:51.123 18:47:28 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:04:51.123 18:47:28 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']'
00:04:51.123 18:47:28 thread -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:51.123 18:47:28 thread -- common/autotest_common.sh@10 -- # set +x
00:04:51.123 ************************************
00:04:51.123 START TEST thread_poller_perf
00:04:51.123 ************************************
00:04:51.123 18:47:28 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:04:51.123 [2024-07-24 18:47:28.389784] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization...
00:04:51.123 [2024-07-24 18:47:28.389852] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3035816 ]
00:04:51.123 EAL: No free 2048 kB hugepages reported on node 1
00:04:51.123 [2024-07-24 18:47:28.453729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:51.123 [2024-07-24 18:47:28.570692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:51.123 Running 1000 pollers for 1 seconds with 0 microseconds period.
00:04:52.497 ======================================
00:04:52.497 busy:2703086904 (cyc)
00:04:52.497 total_run_count: 3941000
00:04:52.497 tsc_hz: 2700000000 (cyc)
00:04:52.497 ======================================
00:04:52.497 poller_cost: 685 (cyc), 253 (nsec)
00:04:52.497
00:04:52.497 real 0m1.319s
00:04:52.497 user 0m1.231s
00:04:52.497 sys 0m0.083s
00:04:52.497 18:47:29 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:52.497 18:47:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:04:52.497 ************************************
00:04:52.497 END TEST thread_poller_perf
00:04:52.497 ************************************
00:04:52.497 18:47:29 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:04:52.497
00:04:52.497 real 0m2.801s
00:04:52.497 user 0m2.528s
00:04:52.497 sys 0m0.273s
00:04:52.497 18:47:29 thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:52.497 18:47:29 thread -- common/autotest_common.sh@10 -- # set +x
00:04:52.497 ************************************
00:04:52.497 END TEST thread
00:04:52.497 ************************************
00:04:52.497 18:47:29 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]]
00:04:52.497 18:47:29 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:04:52.497 18:47:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:52.497 18:47:29 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:52.497 18:47:29 -- common/autotest_common.sh@10 -- # set +x
00:04:52.497 ************************************
00:04:52.497 START TEST app_cmdline
00:04:52.497 ************************************
00:04:52.497 18:47:29 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:04:52.497 * Looking for test storage...
00:04:52.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:04:52.497 18:47:29 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:04:52.497 18:47:29 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3036007
00:04:52.497 18:47:29 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:04:52.497 18:47:29 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3036007
00:04:52.497 18:47:29 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 3036007 ']'
00:04:52.497 18:47:29 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:52.497 18:47:29 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:52.497 18:47:29 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:52.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 18:47:29 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:52.497 18:47:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:04:52.497 [2024-07-24 18:47:29.873161] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization...
00:04:52.497 [2024-07-24 18:47:29.873253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3036007 ]
00:04:52.497 EAL: No free 2048 kB hugepages reported on node 1
00:04:52.497 [2024-07-24 18:47:29.932247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:52.497 [2024-07-24 18:47:30.044202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:52.755 18:47:30 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:52.755 18:47:30 app_cmdline -- common/autotest_common.sh@864 -- # return 0
00:04:52.755 18:47:30 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:04:53.013 {
00:04:53.013 "version": "SPDK v24.09-pre git sha1 0a6bb28fa",
00:04:53.013 "fields": {
00:04:53.013 "major": 24,
00:04:53.013 "minor": 9,
00:04:53.013 "patch": 0,
00:04:53.013 "suffix": "-pre",
00:04:53.013 "commit": "0a6bb28fa"
00:04:53.013 }
00:04:53.013 }
00:04:53.013 18:47:30 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:04:53.013 18:47:30 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:04:53.013 18:47:30 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:04:53.013 18:47:30 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:04:53.013 18:47:30 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:04:53.013 18:47:30 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:53.013 18:47:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:04:53.013 18:47:30 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:04:53.013 18:47:30 app_cmdline -- app/cmdline.sh@26 -- # sort
00:04:53.013 18:47:30 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:53.013 18:47:30 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:04:53.013 18:47:30 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:04:53.013 18:47:30 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:04:53.013 18:47:30 app_cmdline -- common/autotest_common.sh@650 -- # local es=0
00:04:53.013 18:47:30 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:04:53.013 18:47:30 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:04:53.013 18:47:30 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:53.013 18:47:30 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:04:53.013 18:47:30 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:53.013 18:47:30 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:04:53.013 18:47:30 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:53.013 18:47:30 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:04:53.013 18:47:30 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:04:53.013 18:47:30 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:04:53.271 request:
00:04:53.271 {
00:04:53.271 "method": "env_dpdk_get_mem_stats",
00:04:53.271 "req_id": 1
00:04:53.271 }
00:04:53.271 Got JSON-RPC error response
00:04:53.271 response:
00:04:53.271 {
00:04:53.271 "code": -32601,
00:04:53.271 "message": "Method not found"
00:04:53.271 }
00:04:53.271 18:47:30 app_cmdline -- common/autotest_common.sh@653 -- # es=1
00:04:53.271 18:47:30 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:04:53.271 18:47:30 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:04:53.271 18:47:30 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:04:53.271 18:47:30 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3036007
00:04:53.271 18:47:30 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 3036007 ']'
00:04:53.271 18:47:30 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 3036007
00:04:53.271 18:47:30 app_cmdline -- common/autotest_common.sh@955 -- # uname
00:04:53.271 18:47:30 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:53.271 18:47:30 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3036007
00:04:53.271 18:47:30 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:53.271 18:47:30 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:53.271 18:47:30 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3036007'
00:04:53.271 killing process with pid 3036007
00:04:53.271 18:47:30 app_cmdline -- common/autotest_common.sh@969 -- # kill 3036007
00:04:53.271 18:47:30 app_cmdline -- common/autotest_common.sh@974 -- # wait 3036007
00:04:53.836
00:04:53.836 real 0m1.561s
00:04:53.836 user 0m1.895s
00:04:53.836 sys 0m0.449s
00:04:53.836 18:47:31 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:53.836 18:47:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:04:53.836 ************************************
00:04:53.836 END TEST app_cmdline
00:04:53.836 ************************************
00:04:53.836 18:47:31 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh
00:04:53.836 18:47:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:53.836 18:47:31 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:53.836 18:47:31 -- common/autotest_common.sh@10 -- # set +x
00:04:53.836 ************************************
00:04:53.836 START TEST version
00:04:53.836 ************************************
00:04:53.836 18:47:31 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh
00:04:53.836 * Looking for test storage...
00:04:53.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:04:53.836 18:47:31 version -- app/version.sh@17 -- # get_header_version major
00:04:53.836 18:47:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:04:53.836 18:47:31 version -- app/version.sh@14 -- # cut -f2
00:04:53.836 18:47:31 version -- app/version.sh@14 -- # tr -d '"'
00:04:53.836 18:47:31 version -- app/version.sh@17 -- # major=24
00:04:53.836 18:47:31 version -- app/version.sh@18 -- # get_header_version minor
00:04:53.836 18:47:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:04:53.836 18:47:31 version -- app/version.sh@14 -- # cut -f2
00:04:53.836 18:47:31 version -- app/version.sh@14 -- # tr -d '"'
00:04:53.836 18:47:31 version -- app/version.sh@18 -- # minor=9
00:04:53.836 18:47:31 version -- app/version.sh@19 -- # get_header_version patch
00:04:53.836 18:47:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:04:53.836 18:47:31 version -- app/version.sh@14 -- # cut -f2
00:04:53.836 18:47:31 version -- app/version.sh@14 -- # tr -d '"'
00:04:53.836 18:47:31 version -- app/version.sh@19 -- # patch=0
00:04:53.836 18:47:31 version -- app/version.sh@20 -- # get_header_version suffix
00:04:53.836 18:47:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:04:53.836 18:47:31 version -- app/version.sh@14 -- # cut -f2
00:04:53.836 18:47:31 version -- app/version.sh@14 -- # tr -d '"'
00:04:54.095 18:47:31 version -- app/version.sh@20 -- # suffix=-pre
00:04:54.095 18:47:31 version -- app/version.sh@22 -- # version=24.9
00:04:54.095 18:47:31 version -- app/version.sh@25 -- # (( patch != 0 ))
00:04:54.095 18:47:31 version -- app/version.sh@28 -- # version=24.9rc0
00:04:54.095 18:47:31 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python
00:04:54.095 18:47:31 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:04:54.095 18:47:31 version -- app/version.sh@30 -- # py_version=24.9rc0
00:04:54.095 18:47:31 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]]
00:04:54.095
00:04:54.095 real 0m0.097s
00:04:54.095 user 0m0.052s
00:04:54.095 sys 0m0.066s
00:04:54.095 18:47:31 version -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:54.095 18:47:31 version -- common/autotest_common.sh@10 -- # set +x
00:04:54.095 ************************************
00:04:54.095 END TEST version
00:04:54.095 ************************************
00:04:54.095 18:47:31 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']'
00:04:54.095 18:47:31 -- spdk/autotest.sh@202 -- # uname -s
00:04:54.095 18:47:31 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]]
00:04:54.095 18:47:31 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]]
00:04:54.095 18:47:31 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]]
00:04:54.095 18:47:31 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']'
00:04:54.095 18:47:31 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']'
00:04:54.095 18:47:31 -- spdk/autotest.sh@264 -- # timing_exit lib
00:04:54.095 18:47:31 -- common/autotest_common.sh@730 -- # xtrace_disable
00:04:54.095 18:47:31 -- common/autotest_common.sh@10 -- # set +x
00:04:54.095 18:47:31 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']'
00:04:54.095 18:47:31 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']'
00:04:54.095 18:47:31 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']'
00:04:54.095 18:47:31 -- spdk/autotest.sh@284 -- # export NET_TYPE
00:04:54.095 18:47:31 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']'
00:04:54.095 18:47:31 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']'
00:04:54.095 18:47:31 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp
00:04:54.095 18:47:31 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:04:54.095 18:47:31 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:54.095 18:47:31 -- common/autotest_common.sh@10 -- # set +x
00:04:54.095 ************************************
00:04:54.095 START TEST nvmf_tcp
00:04:54.095 ************************************
00:04:54.095 18:47:31 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp
00:04:54.095 * Looking for test storage...
00:04:54.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:04:54.095 18:47:31 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s
00:04:54.095 18:47:31 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']'
00:04:54.095 18:47:31 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp
00:04:54.095 18:47:31 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:04:54.095 18:47:31 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:54.095 18:47:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:04:54.095 ************************************
00:04:54.095 START TEST nvmf_target_core
00:04:54.095 ************************************
00:04:54.095 18:47:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp
00:04:54.095 * Looking for test storage...
00:04:54.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:04:54.095 18:47:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s
00:04:54.095 18:47:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']'
00:04:54.095 18:47:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:04:54.095 18:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s
00:04:54.095 18:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:54.095 18:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:54.095 18:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:54.095 18:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:54.095 18:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:04:54.095 18:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:04:54.095 18:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:04:54.095 18:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:04:54.095 18:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:54.095 18:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:04:54.095 18:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:04:54.095 18:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:04:54.095 18:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:04:54.095 18:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:04:54.095 18:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:04:54.095 18:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:04:54.095 18:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:04:54.095 18:47:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:04:54.095 18:47:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:54.095 18:47:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:54.095 18:47:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:54.096 18:47:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:54.096 18:47:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:54.096 18:47:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH
00:04:54.096 18:47:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:54.096 18:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0
00:04:54.096 18:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:04:54.096 18:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:04:54.096 18:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:04:54.096 18:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:04:54.096 18:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:04:54.096 18:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:04:54.096 18:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:04:54.096 18:47:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0
00:04:54.096 18:47:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:04:54.096 18:47:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@")
00:04:54.096 18:47:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]]
00:04:54.096 18:47:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp
00:04:54.096 18:47:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:04:54.096 18:47:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:54.096 18:47:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:04:54.355 ************************************
00:04:54.355 START TEST nvmf_abort
00:04:54.355 ************************************
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp
00:04:54.355 * Looking for test storage...
00:04:54.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable
00:04:54.355 18:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=()
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=()
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=()
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=()
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=()
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=()
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=()
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' Found 0000:09:00.0 (0x8086 - 0x159b)
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' Found 0000:09:00.1 (0x8086 - 0x159b)
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]]
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' Found net devices under 0000:09:00.0: cvl_0_0
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:04:56.256 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]]
00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' Found net devices under 0000:09:00.1: cvl_0_1
00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes
00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:04:56.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:04:56.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:04:56.257 00:04:56.257 --- 10.0.0.2 ping statistics --- 00:04:56.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:56.257 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:04:56.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:04:56.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:04:56.257 00:04:56.257 --- 10.0.0.1 ping statistics --- 00:04:56.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:56.257 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # 
nvmfpid=3038056 00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3038056 00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 3038056 ']' 00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:56.257 18:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:56.257 [2024-07-24 18:47:33.853880] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:04:56.257 [2024-07-24 18:47:33.853959] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:04:56.515 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.515 [2024-07-24 18:47:33.919222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:56.515 [2024-07-24 18:47:34.033007] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:04:56.515 [2024-07-24 18:47:34.033057] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:04:56.515 [2024-07-24 18:47:34.033087] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:56.515 [2024-07-24 18:47:34.033098] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:56.515 [2024-07-24 18:47:34.033116] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
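The bring-up traced above (nvmf/common.sh@229-268) gives the test a two-port topology on one machine: the target-side port is moved into its own network namespace, each side gets an address on 10.0.0.0/24, and reachability is checked in both directions before the target starts. A condensed sketch of that sequence, reconstructed only from the commands visible in the trace (the cvl_0_0/cvl_0_1 names and the addresses are taken from the log):

    TARGET_IF=cvl_0_0              # target-side port (per the trace)
    INITIATOR_IF=cvl_0_1           # initiator-side port (per the trace)
    NS=cvl_0_0_ns_spdk             # namespace that will own the target port

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                              # host -> namespaced target
    ip netns exec "$NS" ping -c 1 10.0.0.1          # namespace -> host

nvmf_tgt is then started inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE, as traced at nvmf/common.sh@480), so it listens on 10.0.0.2 while the initiator connects from 10.0.0.1 on the host side.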
00:04:56.515 [2024-07-24 18:47:34.033245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:56.515 [2024-07-24 18:47:34.033507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:56.515 [2024-07-24 18:47:34.033511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:56.774 [2024-07-24 18:47:34.183783] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:56.774 Malloc0 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:56.774 Delay0 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:56.774 [2024-07-24 18:47:34.258894] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.774 18:47:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:04:56.774 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.039 [2024-07-24 18:47:34.407229] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:04:58.932 Initializing NVMe Controllers 00:04:58.932 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:04:58.932 controller IO queue size 128 less than required 00:04:58.932 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:04:58.933 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:04:58.933 Initialization complete. Launching workers. 
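The rpc_cmd calls traced in abort.sh@17-27 set up the target that the abort example then exercises. Written out as direct rpc.py calls (rpc_cmd is the harness wrapper around rpc.py, and the default /var/tmp/spdk.sock socket is assumed here), the sequence is:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256     # TCP transport, options exactly as traced
    $RPC bdev_malloc_create 64 4096 -b Malloc0              # 64 MiB ramdisk, 4096-byte blocks
    $RPC bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000        # latency arguments are in microseconds
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The Delay0 wrapper is what makes the abort test meaningful: with roughly a second of injected latency per I/O, commands submitted by the abort example at queue depth 128 are still in flight when the aborts arrive, which is what produces the submitted/failed counts in the summary below.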
00:04:58.933 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33478 00:04:58.933 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33539, failed to submit 62 00:04:58.933 success 33482, unsuccess 57, failed 0 00:04:58.933 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:04:58.933 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.933 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.933 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.933 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:04:58.933 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:04:58.933 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:04:58.933 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:04:58.933 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:04:58.933 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:04:58.933 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:04:58.933 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:04:58.933 rmmod nvme_tcp 00:04:58.933 rmmod nvme_fabrics 00:04:58.933 rmmod nvme_keyring 00:04:59.190 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:04:59.190 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:04:59.190 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:04:59.190 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3038056 ']' 00:04:59.190 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3038056 00:04:59.190 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 3038056 ']' 00:04:59.190 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 3038056 00:04:59.190 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:04:59.190 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:59.190 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3038056 00:04:59.190 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:04:59.190 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:04:59.190 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3038056' 00:04:59.190 killing process with pid 3038056 00:04:59.190 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 3038056 00:04:59.190 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 3038056 00:04:59.448 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:04:59.448 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:04:59.448 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:04:59.448 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:04:59.448 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:04:59.448 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:59.448 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:59.448 18:47:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:01.350 18:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:05:01.350 00:05:01.350 real 0m7.238s 00:05:01.350 user 0m10.700s 00:05:01.350 sys 0m2.467s 00:05:01.350 18:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.350 18:47:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.350 ************************************ 00:05:01.350 END TEST nvmf_abort 00:05:01.350 ************************************ 00:05:01.608 18:47:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:01.608 18:47:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:01.608 18:47:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.608 18:47:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:01.608 ************************************ 00:05:01.608 START TEST nvmf_ns_hotplug_stress 00:05:01.608 ************************************ 00:05:01.608 18:47:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:01.608 * Looking for test storage... 
00:05:01.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:01.608 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:01.608 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:01.608 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:01.608 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:01.608 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:01.608 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:01.608 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:01.608 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:01.608 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:01.608 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:01.608 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:01.608 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:01.608 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:01.608 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:01.608 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:01.608 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:01.608 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:01.608 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:01.608 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:01.608 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:01.608 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:01.608 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:01.608 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.608 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.608 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.608 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:01.608 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.608 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:05:01.608 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:01.608 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:01.608 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:01.609 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:01.609 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:01.609 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:01.609 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:01.609 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:01.609 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:01.609 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:01.609 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:01.609 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:01.609 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:01.609 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:01.609 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:01.609 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:01.609 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:01.609 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:01.609 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:01.609 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:01.609 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:05:01.609 18:47:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:03.510 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:03.510 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:05:03.510 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:03.510 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:03.510 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:03.510 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:03.510 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:03.510 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:05:03.510 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:03.510 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:05:03.510 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:05:03.510 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:05:03.510 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
00:05:03.510 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:05:03.510 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:05:03.510 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:03.510 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:03.510 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:03.510 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:03.510 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:03.510 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:03.510 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:03.510 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:03.510 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:03.510 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:03.510 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:03.510 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:03.510 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:03.510 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:03.510 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:03.510 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:03.510 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:05:03.768 Found 0000:09:00.0 (0x8086 - 0x159b) 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:03.768 18:47:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:05:03.768 Found 0000:09:00.1 (0x8086 - 0x159b) 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:05:03.768 Found net devices under 0000:09:00.0: cvl_0_0 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:05:03.768 Found net devices under 0000:09:00.1: cvl_0_1 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:03.768 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:03.769 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:03.769 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:03.769 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:03.769 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:03.769 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:03.769 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:03.769 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:03.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:03.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:05:03.769 00:05:03.769 --- 10.0.0.2 ping statistics --- 00:05:03.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:03.769 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:05:03.769 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:03.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:03.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:05:03.769 00:05:03.769 --- 10.0.0.1 ping statistics --- 00:05:03.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:03.769 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:05:03.769 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:03.769 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:05:03.769 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:03.769 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:03.769 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:03.769 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:03.769 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:03.769 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:03.769 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:03.769 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:03.769 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:05:03.769 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:03.769 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:03.769 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3040342 00:05:03.769 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:03.769 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3040342 00:05:03.769 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 3040342 ']' 00:05:03.769 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.769 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:03.769 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
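waitforlisten (the autotest_common.sh@831-840 traces above, invoked from nvmf/common.sh@482) blocks until the freshly launched nvmf_tgt answers on its RPC socket, giving up if the process dies first. A hypothetical stand-in that shows the shape of that wait, not the harness's actual implementation (the function name, retry count, and sleep interval here are illustrative):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # waitforlisten_sketch PID [RPC_SOCK] -- illustrative only
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1        # target died during startup
            if $RPC -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                                   # socket is up and answering
            fi
            sleep 0.5
        done
        return 1                                           # gave up waiting
    }

Usage here would be waitforlisten_sketch 3040342, matching the nvmfpid recorded in the trace.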
00:05:03.769 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:03.769 18:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:03.769 [2024-07-24 18:47:41.323491] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:05:03.769 [2024-07-24 18:47:41.323556] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:03.769 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.026 [2024-07-24 18:47:41.392266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:04.026 [2024-07-24 18:47:41.508543] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:04.026 [2024-07-24 18:47:41.508596] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:04.027 [2024-07-24 18:47:41.508621] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:04.027 [2024-07-24 18:47:41.508634] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:04.027 [2024-07-24 18:47:41.508645] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:04.027 [2024-07-24 18:47:41.508745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:04.027 [2024-07-24 18:47:41.508840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:04.027 [2024-07-24 18:47:41.508843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.961 18:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:04.961 18:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:05:04.961 18:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:05:04.961 18:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:04.961 18:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:04.961 18:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:04.961 18:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:04.961 18:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:04.961 [2024-07-24 18:47:42.520450] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:04.961 18:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:05.218 18:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:05.476 
[2024-07-24 18:47:43.018759] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:05.476 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:05.733 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:05.992 Malloc0 00:05:05.992 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:06.249 Delay0 00:05:06.249 18:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:06.506 18:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:06.763 NULL1 00:05:06.763 18:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:07.020 18:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3040716 00:05:07.020 18:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:07.020 18:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040716 00:05:07.020 18:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:07.020 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.394 Read completed with error (sct=0, sc=11) 00:05:08.394 18:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:08.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:08.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:08.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:08.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:08.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:08.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:08.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:08.394 18:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:08.394 18:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:08.650 true 00:05:08.650 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040716 00:05:08.650 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:09.583 18:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:09.841 18:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:09.841 18:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:10.099 true 00:05:10.099 18:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040716 00:05:10.099 18:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:10.356 18:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:10.614 18:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:10.614 18:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:10.871 true 00:05:10.871 18:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040716 00:05:10.871 18:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:11.128 18:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:11.385 18:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:11.385 18:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:11.385 true 00:05:11.385 18:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040716 00:05:11.385 18:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:12.757 18:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:12.757 Message suppressed 999 times: Read completed 
with error (sct=0, sc=11) 00:05:12.757 18:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:12.757 18:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:13.015 true 00:05:13.015 18:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040716 00:05:13.015 18:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:13.273 18:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:13.530 18:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:13.530 18:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:13.788 true 00:05:13.788 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040716 00:05:13.788 18:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:14.721 18:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:14.721 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:14.979 18:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:14.979 18:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:15.236 true 00:05:15.236 18:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040716 00:05:15.236 18:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:15.493 18:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:15.750 18:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:15.750 18:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:16.006 true 00:05:16.006 18:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040716 00:05:16.006 18:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:16.968 18:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:16.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.968 18:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:16.968 18:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:17.225 true 00:05:17.225 18:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040716 00:05:17.225 18:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:17.483 18:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:17.741 18:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:17.741 18:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:17.998 true 00:05:17.998 18:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040716 00:05:17.998 18:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:18.930 18:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:19.188 18:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:19.188 18:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:19.445 true 00:05:19.445 18:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040716 00:05:19.445 18:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:19.703 18:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:19.961 18:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:19.961 18:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:20.218 true 00:05:20.218 18:47:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040716 00:05:20.218 18:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:21.150 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:21.150 18:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:21.150 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:21.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:21.151 18:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:21.151 18:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:21.408 true 00:05:21.408 18:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040716 00:05:21.408 18:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:21.665 18:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:21.923 18:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:21.923 18:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:22.180 true 00:05:22.180 18:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040716 00:05:22.180 18:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:23.113 18:48:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.370 18:48:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:23.370 18:48:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:23.628 true 00:05:23.628 18:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040716 00:05:23.628 18:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.192 18:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.192 18:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:24.192 18:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:24.450 true 00:05:24.450 18:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040716 00:05:24.450 18:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.382 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.382 18:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.382 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.639 18:48:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:25.639 18:48:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:25.897 true 00:05:25.897 18:48:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040716 00:05:25.897 18:48:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.161 18:48:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.418 18:48:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:26.418 18:48:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:26.676 true 00:05:26.676 18:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040716 00:05:26.676 18:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.609 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.609 18:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.866 18:48:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:27.866 18:48:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:27.866 true 
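[Editor's note] Each pass of the single-namespace phase visible above has the same shape: while the background I/O generator (pid 3040716) is still alive, namespace 1 is detached from nqn.2016-06.io.spdk:cnode1, the Delay0 bdev is re-attached as namespace 1, and the NULL1 null bdev underneath is grown by one unit and resized. The interleaved "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines are rate-limited error reports from that I/O generator, consistent with reads landing while namespace 1 is momentarily detached. Reconstructed from the sh@44-sh@50 markers in the trace, the loop is roughly the sketch below; rpc.py abbreviates the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path, and the real ns_hotplug_stress.sh may differ in detail.

    # Minimal sketch of the single-namespace hotplug loop (markers sh@44-sh@50).
    pid=3040716      # background I/O generator being monitored (from the trace)
    null_size=1000   # assumed starting value; the trace shows it counting 1009, 1010, ...
    while kill -0 "$pid"; do                                             # sh@44
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # sh@45
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # sh@46
        null_size=$((null_size + 1))                                     # sh@49
        rpc.py bdev_null_resize NULL1 "$null_size"                       # sh@50
    done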
00:05:28.123 18:48:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040716 00:05:28.123 18:48:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.123 18:48:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.380 18:48:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:28.638 18:48:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:28.638 true 00:05:28.638 18:48:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040716 00:05:28.638 18:48:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.570 18:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.827 18:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:29.827 18:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:30.084 true 00:05:30.084 18:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040716 00:05:30.084 18:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.341 18:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.598 18:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:30.598 18:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:30.855 true 00:05:30.855 18:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040716 00:05:30.855 18:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.112 18:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.369 18:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:31.369 18:48:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:31.625 true 00:05:31.625 18:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040716 00:05:31.625 18:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.995 18:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.995 18:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:32.995 18:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:33.252 true 00:05:33.252 18:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040716 00:05:33.252 18:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.514 18:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.817 18:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:33.817 18:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:34.075 true 00:05:34.075 18:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040716 00:05:34.075 18:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.639 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.639 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.896 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.896 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:35.154 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:35.154 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:35.154 true 00:05:35.154 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040716 00:05:35.154 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:35.411 18:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:35.668 18:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:05:35.668 18:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:05:35.925 true
00:05:35.925 18:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040716
00:05:35.925 18:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:36.854 18:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:37.112 18:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:05:37.112 18:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:05:37.369 Initializing NVMe Controllers
00:05:37.369 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:05:37.369 Controller IO queue size 128, less than required.
00:05:37.369 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:37.369 Controller IO queue size 128, less than required.
00:05:37.369 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:37.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:05:37.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:05:37.369 Initialization complete. Launching workers.
00:05:37.369 ========================================================
00:05:37.369                                                                  Latency(us)
00:05:37.369 Device Information                                            :       IOPS      MiB/s    Average        min        max
00:05:37.369 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     758.29       0.37   87679.65    2494.33 1032200.01
00:05:37.369 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   10186.86       4.97   12566.48    3595.06  449259.67
00:05:37.369 ========================================================
00:05:37.369 Total                                                         :   10945.15       5.34   17770.40    2494.33 1032200.01
00:05:37.369
00:05:37.369 true
00:05:37.369 18:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3040716
00:05:37.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3040716) - No such process
00:05:37.369 18:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3040716
00:05:37.369 18:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:37.627 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:37.885 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:05:37.885 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:05:37.885 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:05:37.885 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:37.885 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:05:38.142 null0
00:05:38.142 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:38.142 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:38.142 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:05:38.399 null1
00:05:38.399 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:38.399 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:38.399 18:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:05:38.657 null2
00:05:38.657 18:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:38.657 18:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:38.657 18:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
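[Editor's note] The Total row of the latency summary above checks out as the IOPS-weighted average of the two per-namespace figures, which is the quickest way to read the table: NSID 1 (the hot-plugged Delay0 namespace) averages ~87.7 ms per read yet contributes only ~7% of the I/O, so the blended average lands near 17.8 ms. A one-line verification, plain awk, not part of the test:

    # IOPS-weighted average of the per-namespace latencies; prints ~10945.15 IOPS
    # and ~17770.4 us (any tiny drift vs. the table comes from the rounded inputs).
    awk 'BEGIN {
        i1 = 758.29;   a1 = 87679.65   # NSID 1: IOPS, average latency (us)
        i2 = 10186.86; a2 = 12566.48   # NSID 2: IOPS, average latency (us)
        t = i1 + i2
        printf "%.2f IOPS, %.2f us weighted average\n", t, (i1*a1 + i2*a2) / t
    }'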
00:05:38.916 null3 00:05:38.916 18:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:38.916 18:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:38.916 18:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:39.174 null4 00:05:39.174 18:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:39.174 18:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:39.174 18:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:39.432 null5 00:05:39.432 18:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:39.432 18:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:39.432 18:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:39.689 null6 00:05:39.689 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:39.689 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:39.689 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:39.947 null7 00:05:39.947 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:39.947 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:39.947 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:39.947 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:39.947 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
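[Editor's note] Once the I/O generator has exited (the "No such process" line above), kill -0 fails, the monitor loop ends, and the script strips namespaces 1 and 2 before switching to the concurrent phase. The sh@58-sh@60 markers above show the setup: eight null bdevs, null0 through null7, created in a loop, each with what reads as a 100 MB size and a 4096-byte block size. As a sketch under the same rpc.py shorthand:

    # Setup for the concurrent phase (markers sh@58-sh@60): one null bdev per worker.
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        rpc.py bdev_null_create "null$i" 100 4096   # name, size, block size (as traced)
    done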
00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
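[Editor's note] The add_remove lines above are the workers being launched: one background job per namespace, each pairing nsid N with bdev null(N-1). Piecing together the sh@14-sh@18 and sh@62-sh@66 markers (the sh@66 wait with all eight worker pids appears just below), the concurrent phase plausibly reduces to the following reconstruction, not the verbatim script:

    # Each worker churns one namespace ten times (sh@14-sh@18); the parent
    # launches eight of them (sh@62-sh@64) and then waits for all (sh@66).
    add_remove() {
        local nsid=$1 bdev=$2                                                         # sh@14
        for ((i = 0; i < 10; i++)); do                                                # sh@16
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18
        done
    }
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # sh@63
        pids+=($!)                         # sh@64
    done
    wait "${pids[@]}"                      # sh@66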
00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3045385 3045386 3045388 3045390 3045392 3045394 3045396 3045398 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:39.948 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:40.207 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:40.207 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:40.207 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:40.207 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:40.207 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:40.207 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:40.207 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.207 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:40.466 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.466 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.466 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:40.466 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.466 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.466 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:40.466 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.466 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.466 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:40.466 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.466 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.466 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:40.466 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.466 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.466 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:40.466 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.466 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.466 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:40.466 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:05:40.466 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.466 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:40.466 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.466 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.466 18:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:40.724 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:40.724 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:40.724 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:40.724 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:40.724 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:40.724 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:40.724 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.724 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:40.983 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.983 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.983 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:40.983 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.983 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.983 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:40.983 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.983 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.983 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:40.983 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.983 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.983 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:40.983 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.983 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.983 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:40.983 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.983 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.983 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:40.983 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.983 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.983 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:40.983 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.983 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.983 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:41.241 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:41.241 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:41.241 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:41.241 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:41.241 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:41.241 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.241 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:41.241 18:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:41.499 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.499 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.499 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:41.499 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.499 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.499 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:41.499 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.499 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.499 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:41.499 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.499 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.499 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:41.499 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.499 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.499 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:41.499 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.499 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.499 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:41.499 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.499 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.499 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:41.499 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.499 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.499 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:41.757 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:41.757 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:41.757 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:41.757 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.757 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:41.757 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:41.757 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:41.757 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:42.016 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:05:42.016 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.016 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:42.016 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.016 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.016 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:42.016 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.016 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.016 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:42.016 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.016 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.016 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:42.016 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.016 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.016 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:42.016 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.016 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.016 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.016 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.016 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:42.016 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:42.016 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.016 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.016 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:42.275 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:42.275 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:42.275 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:42.275 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:42.275 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:42.275 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.275 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:42.275 18:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:42.533 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.533 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.533 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:42.533 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.533 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.533 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:42.533 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.533 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.533 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:42.791 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
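[Editor's note] From here to the end of the excerpt the eight workers interleave freely, which is the point of the stress: add_ns and remove_ns calls for NSIDs 1-8 land in arbitrary order while the target keeps serving. Not part of the test, but one way to inspect which namespaces remain attached to cnode1 after such churn; nvmf_get_subsystems is a standard SPDK RPC, and the jq filter is only illustrative:

    # List the NSIDs currently attached to cnode1 (illustrative; assumes jq is installed).
    rpc.py nvmf_get_subsystems \
        | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces[].nsid'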
00:05:42.791 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.791 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:42.791 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.791 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.791 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:42.791 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.791 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.791 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:42.791 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.791 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.791 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:42.791 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.791 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.791 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:43.049 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:43.049 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:43.049 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:43.049 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:43.049 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.049 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:43.049 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:43.049 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:43.307 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.307 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.307 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:43.307 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.307 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.307 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:43.307 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.307 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.307 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:43.307 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.307 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.307 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:43.307 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.307 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.307 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:43.307 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.307 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.307 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:43.307 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:05:43.307 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.308 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:43.308 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.308 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.308 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:43.565 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:43.565 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:43.565 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.565 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:43.565 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:43.566 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:43.566 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:43.566 18:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:43.824 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.824 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.824 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:43.824 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.824 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.824 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:43.824 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.824 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.824 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:43.824 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.824 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.824 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.824 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.824 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:43.824 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:43.824 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.824 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.824 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:43.824 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.824 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.824 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:43.824 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.824 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.824 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:44.082 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:44.082 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:44.082 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.082 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:44.082 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:44.082 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:44.082 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:44.082 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:44.340 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.340 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.340 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:44.340 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.340 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.341 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:44.341 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.341 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.341 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:44.341 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.341 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.341 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:44.341 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.341 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.341 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:44.341 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.341 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.341 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.341 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.341 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:44.341 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:44.341 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.341 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.341 18:48:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:44.599 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:44.599 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:44.599 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.599 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:44.599 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:44.599 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:44.599 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:44.599 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:44.857 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:05:44.857 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.857 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:44.857 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.857 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.857 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:44.857 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.857 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.857 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:44.857 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.857 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.857 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.857 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:44.857 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.857 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:44.857 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.857 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.857 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:44.857 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.857 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.857 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:44.857 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.857 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.857 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:45.115 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:45.115 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:45.115 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:45.115 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.115 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:45.115 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:45.115 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:45.115 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:45.373 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.373 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.373 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.373 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.373 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.373 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.373 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.373 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.373 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.373 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.373 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.373 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.373 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.373 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.373 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.373 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.373 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:05:45.373 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:05:45.373 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:05:45.373 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:05:45.373 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:05:45.373 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:05:45.373 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:05:45.373 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:05:45.373 rmmod nvme_tcp 00:05:45.373 rmmod nvme_fabrics 00:05:45.373 rmmod nvme_keyring 00:05:45.373 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:05:45.373 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:05:45.373 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:05:45.373 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3040342 ']' 00:05:45.373 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3040342 00:05:45.373 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 3040342 ']' 00:05:45.373 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 3040342 00:05:45.373 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:05:45.373 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:45.373 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3040342 00:05:45.630 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:05:45.630 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:05:45.630 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3040342' 00:05:45.630 killing process with pid 3040342 00:05:45.630 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 3040342 00:05:45.630 18:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 3040342 00:05:45.889 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:05:45.889 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:05:45.889 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:05:45.889 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:05:45.889 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:05:45.889 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:45.889 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:45.889 18:48:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:47.793 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:05:47.793 00:05:47.793 real 0m46.330s 00:05:47.793 user 3m29.872s 00:05:47.793 sys 0m16.424s 00:05:47.793 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.793 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:47.793 ************************************ 00:05:47.793 END TEST nvmf_ns_hotplug_stress 00:05:47.793 ************************************ 00:05:47.793 18:48:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:47.793 18:48:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:47.793 18:48:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.793 18:48:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:47.793 ************************************ 00:05:47.793 START TEST nvmf_delete_subsystem 00:05:47.793 ************************************ 00:05:47.793 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:48.052 * Looking for test storage... 
00:05:48.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:05:48.052 18:48:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 
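The empty arrays declared just above (e810, x722, mlx) are filled immediately below with the PCI device IDs of supported NICs, and the bus scan then reports each match; with SPDK_TEST_NVMF_NICS=e810 this run only cares about Intel 0x1592/0x159b parts. A rough sysfs-based equivalent of that scan (a sketch, not the actual common.sh implementation):

intel=0x8086
e810=(0x1592 0x159b)   # the two IDs that match in this run
for pci in /sys/bus/pci/devices/*; do
  vendor=$(<"$pci/vendor") device=$(<"$pci/device")
  for id in "${e810[@]}"; do
    if [[ $vendor == "$intel" && $device == "$id" ]]; then
      # mirrors the "Found 0000:09:00.x (0x8086 - 0x159b)" lines in the log
      echo "Found ${pci##*/} ($vendor - $device)"
    fi
  done
done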
00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:05:49.956 Found 0000:09:00.0 (0x8086 - 0x159b) 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:05:49.956 Found 0000:09:00.1 (0x8086 - 0x159b) 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:49.956 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:05:49.957 Found net devices under 0000:09:00.0: cvl_0_0 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:05:49.957 Found net devices under 0000:09:00.1: cvl_0_1 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:05:49.957 18:48:27 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:49.957 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:50.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:50.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:05:50.215 00:05:50.215 --- 10.0.0.2 ping statistics --- 00:05:50.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:50.215 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:05:50.215 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:50.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:50.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:05:50.216 00:05:50.216 --- 10.0.0.1 ping statistics --- 00:05:50.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:50.216 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:05:50.216 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:50.216 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:05:50.216 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:50.216 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:50.216 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:50.216 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:50.216 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:50.216 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:50.216 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:50.216 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:05:50.216 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:05:50.216 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:50.216 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:50.216 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3048152 00:05:50.216 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:05:50.216 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3048152 00:05:50.216 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 3048152 ']' 00:05:50.216 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.216 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.216 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.216 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.216 18:48:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:50.216 [2024-07-24 18:48:27.635341] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
00:05:50.216 [2024-07-24 18:48:27.635443] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:50.216 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.216 [2024-07-24 18:48:27.703260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.474 [2024-07-24 18:48:27.822774] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:50.474 [2024-07-24 18:48:27.822820] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:50.474 [2024-07-24 18:48:27.822850] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:50.474 [2024-07-24 18:48:27.822861] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:50.474 [2024-07-24 18:48:27.822871] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:50.474 [2024-07-24 18:48:27.822922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.474 [2024-07-24 18:48:27.822928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.065 18:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:51.065 18:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:05:51.065 18:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:05:51.065 18:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:51.065 18:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:51.065 18:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:51.065 18:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:51.065 18:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.065 18:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:51.065 [2024-07-24 18:48:28.596720] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:51.065 18:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.065 18:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:51.065 18:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.065 18:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:51.065 18:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.065 18:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:51.065 18:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.065 18:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:51.065 [2024-07-24 18:48:28.613063] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:51.065 18:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.065 18:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:05:51.065 18:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.065 18:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:51.065 NULL1 00:05:51.065 18:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.065 18:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:51.065 18:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.065 18:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:51.065 Delay0 00:05:51.065 18:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.065 18:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.065 18:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.065 18:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:51.065 18:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.065 18:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3048308 00:05:51.065 18:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:05:51.065 18:48:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:51.323 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.323 [2024-07-24 18:48:28.687656] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
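Condensing the xtrace above, the target-side setup is six RPCs: create the TCP transport, a subsystem capped at 10 namespaces, a listener on 10.0.0.2:4420, a null backing bdev, a delay bdev that injects roughly one second of artificial latency per I/O (which is what leaves I/O queued for the delete to abort), and finally the namespace. A minimal sketch of the same sequence, assuming a running nvmf_tgt and the in-tree scripts/rpc.py client in place of the test's rpc_cmd wrapper:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512    # 1000 MiB backing bdev, 512-byte blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # per-I/O latencies in microseconds
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0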
00:05:53.241 18:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:05:53.241 18:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.241 18:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:53.499 Read completed with error (sct=0, sc=8) 00:05:53.499 starting I/O failed: -6 00:05:53.499 Read completed with error (sct=0, sc=8) 00:05:53.499 Read completed with error (sct=0, sc=8) 00:05:53.499 Read completed with error (sct=0, sc=8) 00:05:53.499 Write completed with error (sct=0, sc=8) 00:05:53.499 starting I/O failed: -6 00:05:53.499 Write completed with error (sct=0, sc=8) 00:05:53.499 Write completed with error (sct=0, sc=8) 00:05:53.499 Read completed with error (sct=0, sc=8) 00:05:53.499 Read completed with error (sct=0, sc=8) 00:05:53.499 starting I/O failed: -6 00:05:53.499 Read completed with error (sct=0, sc=8) 00:05:53.499 Read completed with error (sct=0, sc=8) 00:05:53.499 Read completed with error (sct=0, sc=8) 00:05:53.499 Read completed with error (sct=0, sc=8) 00:05:53.499 starting I/O failed: -6 00:05:53.499 Write completed with error (sct=0, sc=8) 00:05:53.499 Read completed with error (sct=0, sc=8) 00:05:53.499 Read completed with error (sct=0, sc=8) 00:05:53.499 Write completed with error (sct=0, sc=8) 00:05:53.499 starting I/O failed: -6 00:05:53.499 Read completed with error (sct=0, sc=8) 00:05:53.499 Read completed with error (sct=0, sc=8) 00:05:53.499 Read completed with error (sct=0, sc=8) 00:05:53.499 Write completed with error (sct=0, sc=8) 00:05:53.499 starting I/O failed: -6 00:05:53.499 Write completed with error (sct=0, sc=8) 00:05:53.499 Read completed with error (sct=0, sc=8) 00:05:53.499 Read completed with error (sct=0, sc=8) 00:05:53.499 Read completed with error (sct=0, sc=8) 00:05:53.499 starting I/O failed: -6 00:05:53.499 Read completed with error (sct=0, sc=8) 00:05:53.499 Write completed with error (sct=0, sc=8) 00:05:53.499 Write completed with error (sct=0, sc=8) 00:05:53.499 Write completed with error (sct=0, sc=8) 00:05:53.499 starting I/O failed: -6 00:05:53.499 Read completed with error (sct=0, sc=8) 00:05:53.499 Read completed with error (sct=0, sc=8) 00:05:53.499 Read completed with error (sct=0, sc=8) 00:05:53.499 Read completed with error (sct=0, sc=8) 00:05:53.499 starting I/O failed: -6 00:05:53.499 Read completed with error (sct=0, sc=8) 00:05:53.499 Read completed with error (sct=0, sc=8) 00:05:53.499 Read completed with error (sct=0, sc=8) 00:05:53.499 Write completed with error (sct=0, sc=8) 00:05:53.499 starting I/O failed: -6 00:05:53.499 Read completed with error (sct=0, sc=8) 00:05:53.499 [2024-07-24 18:48:30.906926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e85c0 is same with the state(5) to be set 00:05:53.499 Read completed with error (sct=0, sc=8) 00:05:53.499 Write completed with error (sct=0, sc=8) 00:05:53.499 Read completed with error (sct=0, sc=8) 00:05:53.499 Write completed with error (sct=0, sc=8) 00:05:53.499 Read completed with error (sct=0, sc=8) 00:05:53.499 Read completed with error (sct=0, sc=8) 00:05:53.499 Read completed with error (sct=0, sc=8) 00:05:53.499 Read completed with error (sct=0, sc=8) 00:05:53.499 Write completed with error (sct=0, sc=8) 00:05:53.499 Read completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 
00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 starting I/O failed: -6 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 starting I/O failed: -6 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 starting I/O failed: -6 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 starting I/O failed: -6 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 starting I/O failed: -6 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 starting I/O failed: -6 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 starting I/O failed: -6 00:05:53.500 Write completed with error 
(sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 starting I/O failed: -6 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 starting I/O failed: -6 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 starting I/O failed: -6 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 starting I/O failed: -6 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 [2024-07-24 18:48:30.907825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2b0400d490 is same with the state(5) to be set 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read 
completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Write completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:53.500 Read completed with error (sct=0, sc=8) 00:05:54.434 [2024-07-24 18:48:31.867651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e9ac0 is same with the state(5) to be set 00:05:54.434 Write completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Write completed with error (sct=0, sc=8) 00:05:54.434 Write completed with error (sct=0, sc=8) 00:05:54.434 Write completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Write completed with error (sct=0, sc=8) 00:05:54.434 Write completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Write completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Write completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Write completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Write completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Write completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 [2024-07-24 18:48:31.907999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2b0400d000 is same with the state(5) to be set 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Write completed with error (sct=0, sc=8) 00:05:54.434 Write completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Write completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Write completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Write completed with error (sct=0, sc=8) 00:05:54.434 Write completed with error (sct=0, sc=8) 00:05:54.434 Write completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Write completed 
with error (sct=0, sc=8) 00:05:54.434 Write completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Write completed with error (sct=0, sc=8) 00:05:54.434 Write completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Write completed with error (sct=0, sc=8) 00:05:54.434 Write completed with error (sct=0, sc=8) 00:05:54.434 Write completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 [2024-07-24 18:48:31.908228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2b0400d7c0 is same with the state(5) to be set 00:05:54.434 Write completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Write completed with error (sct=0, sc=8) 00:05:54.434 Write completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Write completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Write completed with error (sct=0, sc=8) 00:05:54.434 Write completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 [2024-07-24 18:48:31.910723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e83e0 is same with the state(5) to be set 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Write completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.434 Read completed with error (sct=0, sc=8) 00:05:54.435 Read completed with error (sct=0, sc=8) 00:05:54.435 Read completed with error (sct=0, sc=8) 00:05:54.435 [2024-07-24 18:48:31.911336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e88f0 is same with the state(5) to be set 00:05:54.435 Initializing NVMe Controllers 00:05:54.435 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:54.435 Controller IO queue size 128, less than required. 00:05:54.435 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:54.435 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:05:54.435 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:05:54.435 Initialization complete. Launching workers. 
00:05:54.435 ======================================================== 00:05:54.435 Latency(us) 00:05:54.435 Device Information : IOPS MiB/s Average min max 00:05:54.435 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 153.41 0.07 938110.37 468.99 1011646.29 00:05:54.435 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 163.34 0.08 978002.68 385.62 2001666.52 00:05:54.435 ======================================================== 00:05:54.435 Total : 316.75 0.15 958681.80 385.62 2001666.52 00:05:54.435 00:05:54.435 [2024-07-24 18:48:31.911852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e9ac0 (9): Bad file descriptor 00:05:54.435 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:05:54.435 18:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.435 18:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:05:54.435 18:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3048308 00:05:54.435 18:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:05:55.001 18:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:05:55.001 18:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3048308 00:05:55.001 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3048308) - No such process 00:05:55.001 18:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3048308 00:05:55.001 18:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:05:55.001 18:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3048308 00:05:55.001 18:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:05:55.001 18:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.001 18:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:05:55.001 18:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.001 18:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3048308 00:05:55.001 18:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:05:55.001 18:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:55.001 18:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:55.001 18:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:55.001 18:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:55.001 18:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.001 18:48:32 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:55.001 18:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.001 18:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:55.001 18:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.001 18:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:55.001 [2024-07-24 18:48:32.434694] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:55.001 18:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.001 18:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.001 18:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.001 18:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:55.001 18:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.001 18:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3048724 00:05:55.001 18:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:55.001 18:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:05:55.001 18:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3048724 00:05:55.001 18:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:55.001 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.001 [2024-07-24 18:48:32.499950] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
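After relaunching spdk_nvme_perf (pid 3048724), the script waits for it by polling the PID with kill -0 on a bounded counter rather than blocking on wait, so a hung run fails fast; the repeating kill -0 / sleep 0.5 records below are that loop. A condensed sketch of the pattern, assuming $perf_pid from the backgrounded perf process:

    delay=0
    while kill -0 "$perf_pid" 2> /dev/null; do   # kill -0 only probes existence, sends no signal
        (( delay++ > 20 )) && exit 1             # give up after ~10 s of polling
        sleep 0.5
    done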
00:05:55.566 18:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:55.566 18:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3048724 00:05:55.566 18:48:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:56.131 18:48:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:56.131 18:48:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3048724 00:05:56.131 18:48:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:56.389 18:48:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:56.389 18:48:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3048724 00:05:56.389 18:48:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:56.953 18:48:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:56.953 18:48:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3048724 00:05:56.953 18:48:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:57.518 18:48:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:57.518 18:48:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3048724 00:05:57.518 18:48:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:58.083 18:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:58.083 18:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3048724 00:05:58.083 18:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:58.341 Initializing NVMe Controllers 00:05:58.341 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:58.341 Controller IO queue size 128, less than required. 00:05:58.341 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:58.341 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:05:58.341 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:05:58.341 Initialization complete. Launching workers. 
00:05:58.341 ======================================================== 00:05:58.341 Latency(us) 00:05:58.341 Device Information : IOPS MiB/s Average min max 00:05:58.341 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003782.12 1000215.10 1041702.94 00:05:58.341 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005635.54 1000425.77 1012659.07 00:05:58.341 ======================================================== 00:05:58.341 Total : 256.00 0.12 1004708.83 1000215.10 1041702.94 00:05:58.341 00:05:58.600 18:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:58.600 18:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3048724 00:05:58.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3048724) - No such process 00:05:58.600 18:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3048724 00:05:58.600 18:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:58.600 18:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:05:58.600 18:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:05:58.600 18:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:05:58.600 18:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:05:58.600 18:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:05:58.600 18:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:05:58.600 18:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:05:58.600 rmmod nvme_tcp 00:05:58.600 rmmod nvme_fabrics 00:05:58.600 rmmod nvme_keyring 00:05:58.600 18:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:05:58.600 18:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:05:58.600 18:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:05:58.600 18:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3048152 ']' 00:05:58.600 18:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3048152 00:05:58.600 18:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 3048152 ']' 00:05:58.600 18:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 3048152 00:05:58.600 18:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:05:58.600 18:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:58.600 18:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3048152 00:05:58.600 18:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:58.600 18:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:05:58.600 18:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3048152' 00:05:58.600 killing process with pid 3048152 00:05:58.601 18:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 3048152 00:05:58.601 18:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 3048152 00:05:58.860 18:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:05:58.860 18:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:05:58.860 18:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:05:58.860 18:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:05:58.860 18:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:05:58.860 18:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:58.860 18:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:58.860 18:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:00.766 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:00.766 00:06:00.766 real 0m12.986s 00:06:00.766 user 0m29.402s 00:06:00.766 sys 0m3.026s 00:06:00.766 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.766 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.766 ************************************ 00:06:00.766 END TEST nvmf_delete_subsystem 00:06:00.766 ************************************ 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:01.025 ************************************ 00:06:01.025 START TEST nvmf_host_management 00:06:01.025 ************************************ 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:01.025 * Looking for test storage... 
00:06:01.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:01.025 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:01.026 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:01.026 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:06:01.026 18:48:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:06:02.927 
18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:02.927 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:02.927 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:02.927 Found net devices under 0000:09:00.0: cvl_0_0 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:02.927 Found net devices under 0000:09:00.1: cvl_0_1 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:02.927 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:03.185 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:03.186 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:03.186 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:03.186 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:03.186 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:03.186 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:03.186 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:03.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:03.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:06:03.186 00:06:03.186 --- 10.0.0.2 ping statistics --- 00:06:03.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:03.186 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:06:03.186 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:03.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:03.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:06:03.186 00:06:03.186 --- 10.0.0.1 ping statistics --- 00:06:03.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:03.186 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:06:03.186 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:03.186 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:06:03.186 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:03.186 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:03.186 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:03.186 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:03.186 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:03.186 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:03.186 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:03.186 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:03.186 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:03.186 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:03.186 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:03.186 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:03.186 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:03.186 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3051175 00:06:03.186 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:03.186 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3051175 00:06:03.186 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3051175 ']' 00:06:03.186 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.186 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.186 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.186 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.186 18:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:03.186 [2024-07-24 18:48:40.702749] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:06:03.186 [2024-07-24 18:48:40.702832] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:03.186 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.186 [2024-07-24 18:48:40.766762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:03.443 [2024-07-24 18:48:40.875270] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:03.443 [2024-07-24 18:48:40.875320] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:03.443 [2024-07-24 18:48:40.875349] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:03.443 [2024-07-24 18:48:40.875361] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:03.443 [2024-07-24 18:48:40.875371] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:03.443 [2024-07-24 18:48:40.875505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.443 [2024-07-24 18:48:40.875569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.443 [2024-07-24 18:48:40.875618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:03.443 [2024-07-24 18:48:40.875621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.376 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.376 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:04.376 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:04.376 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:04.376 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:04.376 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:04.376 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:04.376 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.376 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:04.376 [2024-07-24 18:48:41.676723] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:04.376 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.376 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter 
create_subsystem 00:06:04.376 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:04.376 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:04.376 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:04.376 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:04.376 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:04.376 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.376 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:04.376 Malloc0 00:06:04.376 [2024-07-24 18:48:41.737266] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:04.376 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.376 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:04.376 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:04.376 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:04.376 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3051357 00:06:04.376 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3051357 /var/tmp/bdevperf.sock 00:06:04.376 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3051357 ']' 00:06:04.376 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:04.376 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:04.376 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:04.376 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.377 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:06:04.377 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:04.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
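By this point the target side is fully assembled: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace, the TCP transport was created with rpc_cmd nvmf_create_transport -t tcp -o -u 8192, and a Malloc0 bdev is exported through nqn.2016-06.io.spdk:cnode0 listening on 10.0.0.2:4420, after which bdevperf is launched as the initiator. The rpcs.txt batch that host_management.sh pipes into rpc.py is not echoed in the log, so the following is only a hand-rolled sketch of an equivalent bring-up; the malloc size/block-size arguments and the serial number are assumptions, not values taken from this log:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                     # as logged above
$rpc bdev_malloc_create 64 512 -b Malloc0                        # assumed 64 MiB, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0   # serial number assumed
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0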
00:06:04.377 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
00:06:04.377 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:04.377 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:06:04.377 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:04.377 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:06:04.377 {
00:06:04.377   "params": {
00:06:04.377     "name": "Nvme$subsystem",
00:06:04.377     "trtype": "$TEST_TRANSPORT",
00:06:04.377     "traddr": "$NVMF_FIRST_TARGET_IP",
00:06:04.377     "adrfam": "ipv4",
00:06:04.377     "trsvcid": "$NVMF_PORT",
00:06:04.377     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:06:04.377     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:06:04.377     "hdgst": ${hdgst:-false},
00:06:04.377     "ddgst": ${ddgst:-false}
00:06:04.377   },
00:06:04.377   "method": "bdev_nvme_attach_controller"
00:06:04.377 }
00:06:04.377 EOF
00:06:04.377 )")
00:06:04.377 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:06:04.377 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
00:06:04.377 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
00:06:04.377 18:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:06:04.377   "params": {
00:06:04.377     "name": "Nvme0",
00:06:04.377     "trtype": "tcp",
00:06:04.377     "traddr": "10.0.0.2",
00:06:04.377     "adrfam": "ipv4",
00:06:04.377     "trsvcid": "4420",
00:06:04.377     "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:06:04.377     "hostnqn": "nqn.2016-06.io.spdk:host0",
00:06:04.377     "hdgst": false,
00:06:04.377     "ddgst": false
00:06:04.377   },
00:06:04.377   "method": "bdev_nvme_attach_controller"
00:06:04.377 }'
00:06:04.377 [2024-07-24 18:48:41.816807] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization...
00:06:04.377 [2024-07-24 18:48:41.816878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3051357 ]
00:06:04.377 EAL: No free 2048 kB hugepages reported on node 1
00:06:04.377 [2024-07-24 18:48:41.876935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:04.636 [2024-07-24 18:48:41.989164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:04.636 Running I/O for 10 seconds...
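The JSON handed to bdevperf on /dev/fd/63 is assembled by gen_nvmf_target_json, whose xtrace appears above: each requested subsystem yields one bdev_nvme_attach_controller fragment from the heredoc, the fragments are comma-joined via IFS=',', wrapped in a bdev-subsystem config skeleton, and validated/pretty-printed by jq. Below is a self-contained sketch of the same pattern, with tcp/10.0.0.2/4420 hard-coded where the real helper reads $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT from the environment:

gen_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(
            cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # comma-join the fragments and wrap them in the skeleton bdevperf expects
    local IFS=,
    jq . <<EOF
{"subsystems": [{"subsystem": "bdev", "config": [${config[*]}]}]}
EOF
}
# process substitution is what turns the generated config into /dev/fd/63:
bdevperf -r /var/tmp/bdevperf.sock --json <(gen_json 0) -q 64 -o 65536 -w verify -t 10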
00:06:04.636 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.636 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:04.636 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:04.636 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.636 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:04.636 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.636 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:04.636 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:04.636 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:04.636 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:04.636 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:04.636 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:04.636 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:04.636 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:04.636 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:04.636 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:04.636 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.636 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:04.894 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.894 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=65 00:06:04.894 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 65 -ge 100 ']' 00:06:04.894 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:05.153 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:05.153 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:05.153 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:05.153 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:05.153 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.153 18:48:42 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:05.153 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.153 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451 00:06:05.153 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:06:05.153 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:05.153 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:05.153 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:05.153 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:05.154 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.154 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:05.154 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.154 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:05.154 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.154 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:05.154 [2024-07-24 18:48:42.566649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:05.154 [2024-07-24 18:48:42.566694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.566712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:05.154 [2024-07-24 18:48:42.566726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.566740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:05.154 [2024-07-24 18:48:42.566753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.566766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:05.154 [2024-07-24 18:48:42.566779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.566792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f2790 is same with the state(5) to be set 00:06:05.154 [2024-07-24 18:48:42.567676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.567701] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.567742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.567758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.567774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.567787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.567824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.567839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.567854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.567868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.567882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.567896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.567911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.567924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.567939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.567952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.567967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.567980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.567995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.568023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568036] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.568051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.568080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.568117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.568147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.568176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.568209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.568237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.568266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.568295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.568323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568336] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.568351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.568379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.568407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.568435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.568463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.568491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.568518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.568550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.568578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.568607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568621] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.568636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.568671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.568700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.568727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.568756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.568785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.568813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.568841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.568869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.568897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568914] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.568929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.568957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.568986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.568999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.569014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.569027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.569042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.569056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.569070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.569084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.569099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.569121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.569137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.569158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.569172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.569186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.569201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.569215] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.569230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.569245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.569259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.154 [2024-07-24 18:48:42.569273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.154 [2024-07-24 18:48:42.569291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.155 [2024-07-24 18:48:42.569306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.155 [2024-07-24 18:48:42.569321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.155 [2024-07-24 18:48:42.569335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.155 [2024-07-24 18:48:42.569349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.155 [2024-07-24 18:48:42.569363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.155 [2024-07-24 18:48:42.569378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.155 [2024-07-24 18:48:42.569391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.155 [2024-07-24 18:48:42.569406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.155 [2024-07-24 18:48:42.569420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.155 [2024-07-24 18:48:42.569434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.155 [2024-07-24 18:48:42.569448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.155 [2024-07-24 18:48:42.569463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.155 [2024-07-24 18:48:42.569477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.155 [2024-07-24 18:48:42.569491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.155 [2024-07-24 18:48:42.569505] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:05.155 [2024-07-24 18:48:42.569520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:05.155 [2024-07-24 18:48:42.569534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:05.155 [2024-07-24 18:48:42.569548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:05.155 [2024-07-24 18:48:42.569562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:05.155 [2024-07-24 18:48:42.569577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:05.155 [2024-07-24 18:48:42.569592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:05.155 [2024-07-24 18:48:42.569674] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd035a0 was disconnected and freed. reset controller.
00:06:05.155 [2024-07-24 18:48:42.570784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:06:05.155 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:05.155 18:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:06:05.155 task offset: 65536 on job bdev=Nvme0n1 fails
00:06:05.155
00:06:05.155                                                   Latency(us)
00:06:05.155 Device Information   : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:06:05.155 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:05.155 Job: Nvme0n1 ended in about 0.40 seconds with error
00:06:05.155 Verification LBA range: start 0x0 length 0x400
00:06:05.155 Nvme0n1              :       0.40    1288.12      80.51     161.02       0.00   42939.99    2415.12   41748.86
00:06:05.155 ===================================================================================================================
00:06:05.155 Total                :            1288.12      80.51     161.02       0.00   42939.99    2415.12   41748.86
00:06:05.155 [2024-07-24 18:48:42.572639] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:05.155 [2024-07-24 18:48:42.572667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f2790 (9): Bad file descriptor
00:06:05.155 [2024-07-24 18:48:42.583258] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
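The abort storm above is the intended failure injection, not a bug: waitforio first polls bdevperf's iostat until at least 100 reads have completed (65 on the first poll, 451 on the second), then host0 is yanked from cnode0's allow list. Every queued WRITE on qid:1 completes as ABORTED - SQ DELETION, bdev_nvme frees the qpair and resets the controller, and the reset only succeeds once the host has been re-added. Condensed into a sketch (socket paths as logged; the threshold of 100 comes from the waitforio helper):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# poll bdevperf (not the target) over its own RPC socket until I/O is flowing
while (( $($rpc -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
           | jq -r '.bdevs[0].num_read_ops') < 100 )); do
    sleep 0.25
done
# revoke the initiator's access mid-I/O, then restore it; in-flight commands
# are aborted with SQ DELETION and bdev_nvme recovers via controller reset
$rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0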
00:06:06.087 18:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3051357 00:06:06.087 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3051357) - No such process 00:06:06.087 18:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:06.087 18:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:06.087 18:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:06.087 18:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:06.087 18:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:06:06.087 18:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:06:06.087 18:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:06:06.087 18:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:06:06.087 { 00:06:06.087 "params": { 00:06:06.087 "name": "Nvme$subsystem", 00:06:06.087 "trtype": "$TEST_TRANSPORT", 00:06:06.087 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:06.087 "adrfam": "ipv4", 00:06:06.087 "trsvcid": "$NVMF_PORT", 00:06:06.087 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:06.087 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:06.087 "hdgst": ${hdgst:-false}, 00:06:06.087 "ddgst": ${ddgst:-false} 00:06:06.087 }, 00:06:06.087 "method": "bdev_nvme_attach_controller" 00:06:06.087 } 00:06:06.087 EOF 00:06:06.087 )") 00:06:06.087 18:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:06:06.087 18:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:06:06.087 18:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:06:06.087 18:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:06:06.087 "params": { 00:06:06.087 "name": "Nvme0", 00:06:06.087 "trtype": "tcp", 00:06:06.087 "traddr": "10.0.0.2", 00:06:06.087 "adrfam": "ipv4", 00:06:06.087 "trsvcid": "4420", 00:06:06.087 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:06.087 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:06.087 "hdgst": false, 00:06:06.087 "ddgst": false 00:06:06.087 }, 00:06:06.087 "method": "bdev_nvme_attach_controller" 00:06:06.087 }' 00:06:06.087 [2024-07-24 18:48:43.622719] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
00:06:06.087 [2024-07-24 18:48:43.622801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3051513 ]
00:06:06.087 EAL: No free 2048 kB hugepages reported on node 1
00:06:06.087 [2024-07-24 18:48:43.683561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:06.344 [2024-07-24 18:48:43.796224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:06.603 Running I/O for 1 seconds...
00:06:07.536
00:06:07.536                                                   Latency(us)
00:06:07.536 Device Information   : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:06:07.536 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:07.536 Verification LBA range: start 0x0 length 0x400
00:06:07.536 Nvme0n1              :       1.03    1373.40      85.84       0.00       0.00   45901.59   10631.40   39030.33
00:06:07.536 ===================================================================================================================
00:06:07.536 Total                :            1373.40      85.84       0.00       0.00   45901.59   10631.40   39030.33
00:06:07.794 18:48:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:06:07.794 18:48:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:06:07.794 18:48:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:06:07.794 18:48:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:06:07.794 18:48:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:06:07.794 18:48:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:06:07.794 18:48:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:06:07.794 18:48:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:06:07.794 18:48:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:06:07.794 18:48:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:06:07.794 18:48:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:06:07.794 rmmod nvme_tcp
00:06:07.794 rmmod nvme_fabrics
00:06:07.794 rmmod nvme_keyring
00:06:07.794 18:48:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:06:07.794 18:48:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
00:06:07.794 18:48:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
00:06:07.794 18:48:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3051175 ']'
00:06:07.794 18:48:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3051175
00:06:07.794 18:48:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 3051175 ']'
00:06:07.794 18:48:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 3051175
00:06:07.794 18:48:45
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:06:07.794 18:48:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:07.794 18:48:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3051175 00:06:08.052 18:48:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:08.052 18:48:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:08.052 18:48:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3051175' 00:06:08.052 killing process with pid 3051175 00:06:08.052 18:48:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 3051175 00:06:08.052 18:48:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 3051175 00:06:08.311 [2024-07-24 18:48:45.661342] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:08.311 18:48:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:08.311 18:48:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:08.311 18:48:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:08.311 18:48:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:08.311 18:48:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:08.311 18:48:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:08.311 18:48:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:08.311 18:48:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:10.213 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:10.213 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:10.213 00:06:10.213 real 0m9.328s 00:06:10.213 user 0m22.195s 00:06:10.213 sys 0m2.722s 00:06:10.213 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.213 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:10.214 ************************************ 00:06:10.214 END TEST nvmf_host_management 00:06:10.214 ************************************ 00:06:10.214 18:48:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:10.214 18:48:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:10.214 18:48:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.214 18:48:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:10.214 ************************************ 00:06:10.214 START TEST nvmf_lvol 00:06:10.214 ************************************ 00:06:10.214 
18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:10.485 * Looking for test storage... 00:06:10.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:06:10.485 18:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 
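
The four nvmf_lvol.sh parameters just set define the geometry of the whole test: each backing malloc bdev is 64 MiB of 512-byte blocks, the logical volume starts at 20 MiB, and it is later resized to 30 MiB. A quick sanity check of the block math (illustrative shell arithmetic, not part of the test script):

    echo $(( 64 * 1024 * 1024 / 512 ))   # 131072 blocks per 64 MiB malloc bdev
    echo $(( 20 * 1024 * 1024 / 512 ))   # 40960 blocks in the initial 20 MiB lvol
    echo $(( 30 * 1024 * 1024 / 512 ))   # 61440 blocks after the resize to 30 MiB
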
00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:12.433 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:12.433 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:12.434 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:12.434 Found net devices under 0000:09:00.0: cvl_0_0 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:12.434 Found net devices under 0000:09:00.1: cvl_0_1 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:12.434 18:48:49 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:12.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:12.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:06:12.434 00:06:12.434 --- 10.0.0.2 ping statistics --- 00:06:12.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:12.434 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:12.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:12.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:06:12.434 00:06:12.434 --- 10.0.0.1 ping statistics --- 00:06:12.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:12.434 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3053719 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3053719 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 3053719 ']' 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.434 18:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:12.434 [2024-07-24 18:48:50.025305] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
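
The nvmf_tcp_init block above carves the two detected E810 ports into a self-contained target/initiator pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (the target side, where nvmf_tgt is then launched under ip netns exec), while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule admitting TCP port 4420. Condensed from the trace, the setup is roughly:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespaced target -> root ns

The two successful pings (0.260 ms and 0.102 ms round trips above) verify the path before the target application is started inside the namespace.
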
00:06:12.434 [2024-07-24 18:48:50.025407] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:12.692 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.692 [2024-07-24 18:48:50.093274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:12.692 [2024-07-24 18:48:50.204094] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:12.692 [2024-07-24 18:48:50.204150] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:12.692 [2024-07-24 18:48:50.204179] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:12.692 [2024-07-24 18:48:50.204190] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:12.692 [2024-07-24 18:48:50.204200] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:12.692 [2024-07-24 18:48:50.204271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.692 [2024-07-24 18:48:50.204351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.692 [2024-07-24 18:48:50.204355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.623 18:48:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.624 18:48:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:06:13.624 18:48:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:13.624 18:48:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:13.624 18:48:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:13.624 18:48:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:13.624 18:48:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:13.624 [2024-07-24 18:48:51.221598] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:13.881 18:48:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:14.138 18:48:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:14.138 18:48:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:14.394 18:48:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:14.394 18:48:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:14.652 18:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:14.910 18:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ed1a42d9-2b05-4b95-ba9c-5dcb79294b3f 
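
With the TCP transport created (-t tcp -o -u 8192), the storage stack for the lvol test is assembled bottom-up: two 64 MiB malloc bdevs, a RAID0 striped across them with a 64 KiB strip size, and an lvstore named lvs (UUID ed1a42d9-2b05-4b95-ba9c-5dcb79294b3f) on top of the RAID. The trace that follows then exercises the full logical-volume lifecycle over NVMe-oF while spdk_nvme_perf drives random writes at the exported namespace. Condensed into the underlying RPC sequence (rpc.py shown without its full workspace path; the angle-bracket placeholders stand for the UUIDs captured in the log):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                      # -> Malloc0
    rpc.py bdev_malloc_create 64 512                      # -> Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    rpc.py bdev_lvol_create_lvstore raid0 lvs             # -> <lvs-uuid>
    rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20         # 20 MiB volume -> <lvol-uuid>
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # while spdk_nvme_perf runs randwrite against the namespace:
    rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT     # -> <snapshot-uuid>
    rpc.py bdev_lvol_resize <lvol-uuid> 30                # grow the live volume to 30 MiB
    rpc.py bdev_lvol_clone <snapshot-uuid> MY_CLONE       # -> <clone-uuid>
    rpc.py bdev_lvol_inflate <clone-uuid>                 # decouple the clone from its snapshot
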
00:06:14.910 18:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ed1a42d9-2b05-4b95-ba9c-5dcb79294b3f lvol 20 00:06:15.168 18:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=35f80ba6-ea3d-4dc3-8574-977eba5570ff 00:06:15.168 18:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:15.425 18:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 35f80ba6-ea3d-4dc3-8574-977eba5570ff 00:06:15.682 18:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:15.939 [2024-07-24 18:48:53.285868] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:15.939 18:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:16.196 18:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3054152 00:06:16.196 18:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:16.196 18:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:16.196 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.128 18:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 35f80ba6-ea3d-4dc3-8574-977eba5570ff MY_SNAPSHOT 00:06:17.386 18:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=db0d6ffa-d72e-43f9-a6a9-bef26c074288 00:06:17.386 18:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 35f80ba6-ea3d-4dc3-8574-977eba5570ff 30 00:06:17.644 18:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone db0d6ffa-d72e-43f9-a6a9-bef26c074288 MY_CLONE 00:06:17.902 18:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b535a7c2-6c4d-4a12-8190-a18f1246484b 00:06:17.902 18:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate b535a7c2-6c4d-4a12-8190-a18f1246484b 00:06:18.468 18:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3054152 00:06:26.578 Initializing NVMe Controllers 00:06:26.578 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:26.578 Controller IO queue size 128, less than required. 00:06:26.578 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:26.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:06:26.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:06:26.578 Initialization complete. Launching workers.
00:06:26.578 ========================================================
00:06:26.578                                                                 Latency(us)
00:06:26.578 Device Information                                            :       IOPS      MiB/s    Average        min        max
00:06:26.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3:    9547.70      37.30   13410.25    2026.21   89952.80
00:06:26.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4:   10707.70      41.83   11956.49    2172.42   55489.66
00:06:26.578 ========================================================
00:06:26.578 Total                                                         :   20255.40      79.12   12641.74    2026.21   89952.80
00:06:26.578
00:06:26.578 18:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:06:26.835 18:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 35f80ba6-ea3d-4dc3-8574-977eba5570ff
00:06:27.092 18:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ed1a42d9-2b05-4b95-ba9c-5dcb79294b3f
00:06:27.350 18:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:06:27.350 18:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:06:27.350 18:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:06:27.350 18:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup
00:06:27.350 18:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync
00:06:27.350 18:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:06:27.350 18:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e
00:06:27.350 18:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20}
00:06:27.350 18:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:06:27.350 rmmod nvme_tcp
00:06:27.350 rmmod nvme_fabrics
00:06:27.350 rmmod nvme_keyring
00:06:27.350 18:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:06:27.350 18:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e
00:06:27.350 18:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0
00:06:27.350 18:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3053719 ']'
00:06:27.350 18:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3053719
00:06:27.350 18:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 3053719 ']'
00:06:27.350 18:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 3053719
00:06:27.350 18:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname
00:06:27.350 18:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:27.350 18:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3053719
00:06:27.350 18:49:04
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.351 18:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:27.351 18:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3053719' 00:06:27.351 killing process with pid 3053719 00:06:27.351 18:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 3053719 00:06:27.351 18:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 3053719 00:06:27.609 18:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:27.609 18:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:27.609 18:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:27.609 18:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:27.609 18:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:27.609 18:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:27.609 18:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:27.609 18:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:30.148 00:06:30.148 real 0m19.428s 00:06:30.148 user 1m4.819s 00:06:30.148 sys 0m6.283s 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:30.148 ************************************ 00:06:30.148 END TEST nvmf_lvol 00:06:30.148 ************************************ 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:30.148 ************************************ 00:06:30.148 START TEST nvmf_lvs_grow 00:06:30.148 ************************************ 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:30.148 * Looking for test storage... 
00:06:30.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.148 18:49:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:30.148 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:30.149 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:30.149 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:30.149 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:30.149 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:30.149 18:49:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:30.149 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:30.149 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:30.149 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:30.149 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:30.149 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:30.149 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:30.149 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:30.149 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:30.149 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:30.149 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:06:30.149 18:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:32.054 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:32.054 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:32.054 
18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:32.054 Found net devices under 0000:09:00.0: cvl_0_0 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:32.054 Found net devices under 0000:09:00.1: cvl_0_1 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:32.054 18:49:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:32.054 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:32.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:32.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:06:32.054 00:06:32.054 --- 10.0.0.2 ping statistics --- 00:06:32.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.055 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:06:32.055 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:32.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:32.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:06:32.055 00:06:32.055 --- 10.0.0.1 ping statistics --- 00:06:32.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.055 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:06:32.055 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:32.055 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:06:32.055 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:32.055 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:32.055 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:32.055 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:32.055 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:32.055 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:32.055 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:32.055 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:06:32.055 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:32.055 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:32.055 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:32.055 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3057426 00:06:32.055 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:32.055 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3057426 00:06:32.055 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 3057426 ']' 00:06:32.055 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.055 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.055 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.055 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.055 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:32.055 [2024-07-24 18:49:09.652539] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
00:06:32.055 [2024-07-24 18:49:09.652616] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:32.314 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.314 [2024-07-24 18:49:09.718843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.314 [2024-07-24 18:49:09.836125] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:32.314 [2024-07-24 18:49:09.836188] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:32.314 [2024-07-24 18:49:09.836204] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:32.314 [2024-07-24 18:49:09.836217] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:32.314 [2024-07-24 18:49:09.836228] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:32.314 [2024-07-24 18:49:09.836259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.572 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.572 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:06:32.572 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:32.572 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:32.572 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:32.572 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:32.572 18:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:32.830 [2024-07-24 18:49:10.264423] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:32.830 18:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:32.830 18:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.830 18:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.830 18:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:32.830 ************************************ 00:06:32.830 START TEST lvs_grow_clean 00:06:32.830 ************************************ 00:06:32.830 18:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:06:32.830 18:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:32.830 18:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:32.830 18:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:32.830 18:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:06:32.830 18:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:32.830 18:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:32.830 18:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:32.830 18:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:32.830 18:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:33.088 18:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:33.088 18:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:33.346 18:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=427517f2-de63-4f0f-9339-c4159476ec26 00:06:33.346 18:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 427517f2-de63-4f0f-9339-c4159476ec26 00:06:33.346 18:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:33.605 18:49:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:33.605 18:49:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:33.605 18:49:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 427517f2-de63-4f0f-9339-c4159476ec26 lvol 150 00:06:33.863 18:49:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f9af7d34-11d5-454b-8360-b97081898f85 00:06:33.863 18:49:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:33.863 18:49:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:34.124 [2024-07-24 18:49:11.620441] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:34.124 [2024-07-24 18:49:11.620536] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:34.124 true 00:06:34.124 18:49:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 427517f2-de63-4f0f-9339-c4159476ec26 00:06:34.124 18:49:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:34.428 18:49:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:34.428 18:49:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:34.686 18:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f9af7d34-11d5-454b-8360-b97081898f85 00:06:34.944 18:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:35.202 [2024-07-24 18:49:12.659586] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:35.202 18:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:35.460 18:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3057866 00:06:35.460 18:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:35.460 18:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:35.460 18:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3057866 /var/tmp/bdevperf.sock 00:06:35.460 18:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 3057866 ']' 00:06:35.460 18:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:35.460 18:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.460 18:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:35.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:35.460 18:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.460 18:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:35.460 [2024-07-24 18:49:12.960605] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
00:06:35.460 [2024-07-24 18:49:12.960684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3057866 ] 00:06:35.460 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.460 [2024-07-24 18:49:13.021509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.719 [2024-07-24 18:49:13.143577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.719 18:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.719 18:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:06:35.719 18:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:36.282 Nvme0n1 00:06:36.282 18:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:36.540 [ 00:06:36.540 { 00:06:36.540 "name": "Nvme0n1", 00:06:36.540 "aliases": [ 00:06:36.540 "f9af7d34-11d5-454b-8360-b97081898f85" 00:06:36.540 ], 00:06:36.540 "product_name": "NVMe disk", 00:06:36.540 "block_size": 4096, 00:06:36.540 "num_blocks": 38912, 00:06:36.540 "uuid": "f9af7d34-11d5-454b-8360-b97081898f85", 00:06:36.540 "assigned_rate_limits": { 00:06:36.540 "rw_ios_per_sec": 0, 00:06:36.540 "rw_mbytes_per_sec": 0, 00:06:36.540 "r_mbytes_per_sec": 0, 00:06:36.540 "w_mbytes_per_sec": 0 00:06:36.540 }, 00:06:36.540 "claimed": false, 00:06:36.540 "zoned": false, 00:06:36.540 "supported_io_types": { 00:06:36.540 "read": true, 00:06:36.540 "write": true, 00:06:36.540 "unmap": true, 00:06:36.540 "flush": true, 00:06:36.540 "reset": true, 00:06:36.540 "nvme_admin": true, 00:06:36.540 "nvme_io": true, 00:06:36.540 "nvme_io_md": false, 00:06:36.540 "write_zeroes": true, 00:06:36.540 "zcopy": false, 00:06:36.540 "get_zone_info": false, 00:06:36.540 "zone_management": false, 00:06:36.540 "zone_append": false, 00:06:36.540 "compare": true, 00:06:36.540 "compare_and_write": true, 00:06:36.540 "abort": true, 00:06:36.540 "seek_hole": false, 00:06:36.540 "seek_data": false, 00:06:36.540 "copy": true, 00:06:36.540 "nvme_iov_md": false 00:06:36.540 }, 00:06:36.540 "memory_domains": [ 00:06:36.540 { 00:06:36.540 "dma_device_id": "system", 00:06:36.540 "dma_device_type": 1 00:06:36.540 } 00:06:36.540 ], 00:06:36.540 "driver_specific": { 00:06:36.540 "nvme": [ 00:06:36.540 { 00:06:36.540 "trid": { 00:06:36.540 "trtype": "TCP", 00:06:36.540 "adrfam": "IPv4", 00:06:36.540 "traddr": "10.0.0.2", 00:06:36.540 "trsvcid": "4420", 00:06:36.540 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:36.540 }, 00:06:36.540 "ctrlr_data": { 00:06:36.540 "cntlid": 1, 00:06:36.540 "vendor_id": "0x8086", 00:06:36.540 "model_number": "SPDK bdev Controller", 00:06:36.540 "serial_number": "SPDK0", 00:06:36.540 "firmware_revision": "24.09", 00:06:36.540 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:36.540 "oacs": { 00:06:36.540 "security": 0, 00:06:36.540 "format": 0, 00:06:36.540 "firmware": 0, 00:06:36.540 "ns_manage": 0 00:06:36.540 }, 00:06:36.540 
"multi_ctrlr": true, 00:06:36.540 "ana_reporting": false 00:06:36.540 }, 00:06:36.540 "vs": { 00:06:36.540 "nvme_version": "1.3" 00:06:36.540 }, 00:06:36.540 "ns_data": { 00:06:36.540 "id": 1, 00:06:36.540 "can_share": true 00:06:36.540 } 00:06:36.540 } 00:06:36.540 ], 00:06:36.540 "mp_policy": "active_passive" 00:06:36.540 } 00:06:36.540 } 00:06:36.540 ] 00:06:36.540 18:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3057999 00:06:36.540 18:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:36.540 18:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:36.540 Running I/O for 10 seconds... 00:06:37.475 Latency(us) 00:06:37.475 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:37.475 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:37.475 Nvme0n1 : 1.00 14363.00 56.11 0.00 0.00 0.00 0.00 0.00 00:06:37.475 =================================================================================================================== 00:06:37.475 Total : 14363.00 56.11 0.00 0.00 0.00 0.00 0.00 00:06:37.475 00:06:38.408 18:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 427517f2-de63-4f0f-9339-c4159476ec26 00:06:38.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:38.666 Nvme0n1 : 2.00 14427.50 56.36 0.00 0.00 0.00 0.00 0.00 00:06:38.666 =================================================================================================================== 00:06:38.666 Total : 14427.50 56.36 0.00 0.00 0.00 0.00 0.00 00:06:38.666 00:06:38.666 true 00:06:38.666 18:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 427517f2-de63-4f0f-9339-c4159476ec26 00:06:38.666 18:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:06:38.924 18:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:38.924 18:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:38.924 18:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3057999 00:06:39.489 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:39.489 Nvme0n1 : 3.00 14532.00 56.77 0.00 0.00 0.00 0.00 0.00 00:06:39.489 =================================================================================================================== 00:06:39.489 Total : 14532.00 56.77 0.00 0.00 0.00 0.00 0.00 00:06:39.489 00:06:40.423 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:40.423 Nvme0n1 : 4.00 14584.75 56.97 0.00 0.00 0.00 0.00 0.00 00:06:40.423 =================================================================================================================== 00:06:40.423 Total : 14584.75 56.97 0.00 0.00 0.00 0.00 0.00 00:06:40.423 00:06:41.797 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:06:41.797 Nvme0n1 : 5.00 14629.20 57.15 0.00 0.00 0.00 0.00 0.00 00:06:41.797 =================================================================================================================== 00:06:41.797 Total : 14629.20 57.15 0.00 0.00 0.00 0.00 0.00 00:06:41.797 00:06:42.730 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:42.730 Nvme0n1 : 6.00 14668.83 57.30 0.00 0.00 0.00 0.00 0.00 00:06:42.730 =================================================================================================================== 00:06:42.730 Total : 14668.83 57.30 0.00 0.00 0.00 0.00 0.00 00:06:42.730 00:06:43.664 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:43.664 Nvme0n1 : 7.00 14690.29 57.38 0.00 0.00 0.00 0.00 0.00 00:06:43.664 =================================================================================================================== 00:06:43.664 Total : 14690.29 57.38 0.00 0.00 0.00 0.00 0.00 00:06:43.664 00:06:44.598 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:44.598 Nvme0n1 : 8.00 14719.88 57.50 0.00 0.00 0.00 0.00 0.00 00:06:44.598 =================================================================================================================== 00:06:44.598 Total : 14719.88 57.50 0.00 0.00 0.00 0.00 0.00 00:06:44.598 00:06:45.532 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:45.532 Nvme0n1 : 9.00 14746.56 57.60 0.00 0.00 0.00 0.00 0.00 00:06:45.532 =================================================================================================================== 00:06:45.532 Total : 14746.56 57.60 0.00 0.00 0.00 0.00 0.00 00:06:45.532 00:06:46.463 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:46.463 Nvme0n1 : 10.00 14766.60 57.68 0.00 0.00 0.00 0.00 0.00 00:06:46.463 =================================================================================================================== 00:06:46.463 Total : 14766.60 57.68 0.00 0.00 0.00 0.00 0.00 00:06:46.463 00:06:46.463 00:06:46.463 Latency(us) 00:06:46.463 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:46.463 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:46.463 Nvme0n1 : 10.01 14769.72 57.69 0.00 0.00 8661.47 4538.97 16505.36 00:06:46.463 =================================================================================================================== 00:06:46.463 Total : 14769.72 57.69 0.00 0.00 8661.47 4538.97 16505.36 00:06:46.463 0 00:06:46.463 18:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3057866 00:06:46.463 18:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 3057866 ']' 00:06:46.463 18:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 3057866 00:06:46.463 18:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:06:46.463 18:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:46.463 18:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3057866 00:06:46.721 18:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:46.721 
18:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:46.721 18:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3057866' 00:06:46.721 killing process with pid 3057866 00:06:46.721 18:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 3057866 00:06:46.721 Received shutdown signal, test time was about 10.000000 seconds 00:06:46.721 00:06:46.721 Latency(us) 00:06:46.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:46.721 =================================================================================================================== 00:06:46.721 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:46.721 18:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 3057866 00:06:46.978 18:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:47.235 18:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:47.492 18:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 427517f2-de63-4f0f-9339-c4159476ec26 00:06:47.492 18:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:06:47.750 18:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:06:47.750 18:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:06:47.750 18:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:48.008 [2024-07-24 18:49:25.395884] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:06:48.008 18:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 427517f2-de63-4f0f-9339-c4159476ec26 00:06:48.008 18:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:06:48.008 18:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 427517f2-de63-4f0f-9339-c4159476ec26 00:06:48.008 18:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:48.008 18:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.008 18:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:48.008 18:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.008 18:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:48.008 18:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.008 18:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:48.008 18:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:48.008 18:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 427517f2-de63-4f0f-9339-c4159476ec26 00:06:48.266 request: 00:06:48.266 { 00:06:48.266 "uuid": "427517f2-de63-4f0f-9339-c4159476ec26", 00:06:48.266 "method": "bdev_lvol_get_lvstores", 00:06:48.266 "req_id": 1 00:06:48.266 } 00:06:48.266 Got JSON-RPC error response 00:06:48.266 response: 00:06:48.266 { 00:06:48.266 "code": -19, 00:06:48.266 "message": "No such device" 00:06:48.266 } 00:06:48.266 18:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:06:48.266 18:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:48.266 18:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:48.266 18:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:48.266 18:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:48.523 aio_bdev 00:06:48.523 18:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f9af7d34-11d5-454b-8360-b97081898f85 00:06:48.523 18:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=f9af7d34-11d5-454b-8360-b97081898f85 00:06:48.523 18:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:48.523 18:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:06:48.523 18:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:48.523 18:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:48.523 18:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:06:48.781 18:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b f9af7d34-11d5-454b-8360-b97081898f85 -t 2000 00:06:49.044 [ 00:06:49.044 { 00:06:49.044 "name": "f9af7d34-11d5-454b-8360-b97081898f85", 00:06:49.044 "aliases": [ 00:06:49.044 "lvs/lvol" 00:06:49.044 ], 00:06:49.044 "product_name": "Logical Volume", 00:06:49.044 "block_size": 4096, 00:06:49.044 "num_blocks": 38912, 00:06:49.044 "uuid": "f9af7d34-11d5-454b-8360-b97081898f85", 00:06:49.044 "assigned_rate_limits": { 00:06:49.044 "rw_ios_per_sec": 0, 00:06:49.044 "rw_mbytes_per_sec": 0, 00:06:49.044 "r_mbytes_per_sec": 0, 00:06:49.044 "w_mbytes_per_sec": 0 00:06:49.044 }, 00:06:49.044 "claimed": false, 00:06:49.045 "zoned": false, 00:06:49.045 "supported_io_types": { 00:06:49.045 "read": true, 00:06:49.045 "write": true, 00:06:49.045 "unmap": true, 00:06:49.045 "flush": false, 00:06:49.045 "reset": true, 00:06:49.045 "nvme_admin": false, 00:06:49.045 "nvme_io": false, 00:06:49.045 "nvme_io_md": false, 00:06:49.045 "write_zeroes": true, 00:06:49.045 "zcopy": false, 00:06:49.045 "get_zone_info": false, 00:06:49.045 "zone_management": false, 00:06:49.045 "zone_append": false, 00:06:49.045 "compare": false, 00:06:49.045 "compare_and_write": false, 00:06:49.045 "abort": false, 00:06:49.045 "seek_hole": true, 00:06:49.045 "seek_data": true, 00:06:49.045 "copy": false, 00:06:49.045 "nvme_iov_md": false 00:06:49.045 }, 00:06:49.045 "driver_specific": { 00:06:49.045 "lvol": { 00:06:49.045 "lvol_store_uuid": "427517f2-de63-4f0f-9339-c4159476ec26", 00:06:49.045 "base_bdev": "aio_bdev", 00:06:49.045 "thin_provision": false, 00:06:49.045 "num_allocated_clusters": 38, 00:06:49.045 "snapshot": false, 00:06:49.045 "clone": false, 00:06:49.045 "esnap_clone": false 00:06:49.045 } 00:06:49.045 } 00:06:49.045 } 00:06:49.045 ] 00:06:49.045 18:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:06:49.045 18:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 427517f2-de63-4f0f-9339-c4159476ec26 00:06:49.045 18:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:06:49.303 18:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:06:49.303 18:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 427517f2-de63-4f0f-9339-c4159476ec26 00:06:49.303 18:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:06:49.303 18:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:06:49.303 18:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f9af7d34-11d5-454b-8360-b97081898f85 00:06:49.560 18:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 427517f2-de63-4f0f-9339-c4159476ec26 00:06:50.125 18:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:50.382 18:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:50.382 00:06:50.382 real 0m17.442s 00:06:50.382 user 0m16.876s 00:06:50.382 sys 0m1.913s 00:06:50.382 18:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.382 18:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:50.382 ************************************ 00:06:50.382 END TEST lvs_grow_clean 00:06:50.382 ************************************ 00:06:50.382 18:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:06:50.382 18:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:50.382 18:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.382 18:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:50.382 ************************************ 00:06:50.382 START TEST lvs_grow_dirty 00:06:50.382 ************************************ 00:06:50.382 18:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:06:50.382 18:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:50.382 18:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:50.382 18:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:50.382 18:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:50.382 18:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:50.382 18:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:50.382 18:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:50.383 18:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:50.383 18:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:50.640 18:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:50.641 18:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:50.898 18:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
lvs=c4e826dc-5c59-4cbd-98ea-8967fd0acaff 00:06:50.898 18:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4e826dc-5c59-4cbd-98ea-8967fd0acaff 00:06:50.898 18:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:51.156 18:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:51.156 18:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:51.156 18:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c4e826dc-5c59-4cbd-98ea-8967fd0acaff lvol 150 00:06:51.414 18:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=eb822a83-8d99-4e97-a1dd-11f9f712468e 00:06:51.414 18:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:51.414 18:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:51.672 [2024-07-24 18:49:29.050290] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:51.672 [2024-07-24 18:49:29.050370] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:51.672 true 00:06:51.672 18:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4e826dc-5c59-4cbd-98ea-8967fd0acaff 00:06:51.672 18:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:51.930 18:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:51.930 18:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:52.190 18:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 eb822a83-8d99-4e97-a1dd-11f9f712468e 00:06:52.488 18:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:52.745 [2024-07-24 18:49:30.117516] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:52.745 18:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 
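Both the clean and the dirty variant of this test provision the same stack before doing anything interesting: a file-backed AIO bdev, an lvstore with 4 MiB clusters on top of it, a 150 MiB lvol, and an NVMe/TCP export of that lvol. Condensed out of the xtrace above, the sequence is roughly the following (a minimal sketch, assuming a running nvmf_tgt on the default RPC socket; scripts/rpc.py abbreviates the absolute workspace path in the trace, /tmp/aio_file is a placeholder for the aio_bdev backing file, and <lvs-uuid>/<lvol-uuid> stand in for the UUIDs the harness captures from each command's output):

  # 200 MiB file behind a 4 KiB-block AIO bdev
  truncate -s 200M /tmp/aio_file
  scripts/rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
  # lvstore with 4 MiB clusters, then a 150 MiB lvol on it
  scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs         # prints <lvs-uuid>
  scripts/rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150    # prints <lvol-uuid>
  # double the backing file; the lvstore keeps its old cluster count (49)
  # until it is explicitly grown later in the run
  truncate -s 400M /tmp/aio_file
  scripts/rpc.py bdev_aio_rescan aio_bdev
  # export the lvol over NVMe/TCP (transport was created once at target startup)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420

A separate bdevperf process then attaches to that subsystem over its own RPC socket (the bdev_nvme_attach_controller call against /var/tmp/bdevperf.sock below) and drives 128-deep 4 KiB random writes at it for 10 seconds while the lvstore is grown underneath.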
00:06:53.003 18:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3060051 00:06:53.003 18:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:53.003 18:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:53.003 18:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3060051 /var/tmp/bdevperf.sock 00:06:53.003 18:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3060051 ']' 00:06:53.003 18:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:53.003 18:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.003 18:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:53.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:53.003 18:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.003 18:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:06:53.003 [2024-07-24 18:49:30.419131] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
00:06:53.003 [2024-07-24 18:49:30.419200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3060051 ] 00:06:53.003 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.003 [2024-07-24 18:49:30.479772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.003 [2024-07-24 18:49:30.597456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.936 18:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.936 18:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:06:53.936 18:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:54.194 Nvme0n1 00:06:54.194 18:49:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:54.452 [ 00:06:54.452 { 00:06:54.452 "name": "Nvme0n1", 00:06:54.452 "aliases": [ 00:06:54.452 "eb822a83-8d99-4e97-a1dd-11f9f712468e" 00:06:54.452 ], 00:06:54.452 "product_name": "NVMe disk", 00:06:54.452 "block_size": 4096, 00:06:54.452 "num_blocks": 38912, 00:06:54.452 "uuid": "eb822a83-8d99-4e97-a1dd-11f9f712468e", 00:06:54.452 "assigned_rate_limits": { 00:06:54.452 "rw_ios_per_sec": 0, 00:06:54.452 "rw_mbytes_per_sec": 0, 00:06:54.452 "r_mbytes_per_sec": 0, 00:06:54.452 "w_mbytes_per_sec": 0 00:06:54.452 }, 00:06:54.452 "claimed": false, 00:06:54.452 "zoned": false, 00:06:54.452 "supported_io_types": { 00:06:54.452 "read": true, 00:06:54.452 "write": true, 00:06:54.452 "unmap": true, 00:06:54.452 "flush": true, 00:06:54.452 "reset": true, 00:06:54.452 "nvme_admin": true, 00:06:54.452 "nvme_io": true, 00:06:54.452 "nvme_io_md": false, 00:06:54.452 "write_zeroes": true, 00:06:54.452 "zcopy": false, 00:06:54.452 "get_zone_info": false, 00:06:54.452 "zone_management": false, 00:06:54.452 "zone_append": false, 00:06:54.452 "compare": true, 00:06:54.452 "compare_and_write": true, 00:06:54.452 "abort": true, 00:06:54.452 "seek_hole": false, 00:06:54.452 "seek_data": false, 00:06:54.452 "copy": true, 00:06:54.452 "nvme_iov_md": false 00:06:54.452 }, 00:06:54.452 "memory_domains": [ 00:06:54.452 { 00:06:54.452 "dma_device_id": "system", 00:06:54.452 "dma_device_type": 1 00:06:54.452 } 00:06:54.452 ], 00:06:54.452 "driver_specific": { 00:06:54.452 "nvme": [ 00:06:54.452 { 00:06:54.452 "trid": { 00:06:54.452 "trtype": "TCP", 00:06:54.452 "adrfam": "IPv4", 00:06:54.452 "traddr": "10.0.0.2", 00:06:54.452 "trsvcid": "4420", 00:06:54.452 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:54.452 }, 00:06:54.452 "ctrlr_data": { 00:06:54.452 "cntlid": 1, 00:06:54.452 "vendor_id": "0x8086", 00:06:54.452 "model_number": "SPDK bdev Controller", 00:06:54.452 "serial_number": "SPDK0", 00:06:54.452 "firmware_revision": "24.09", 00:06:54.452 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:54.452 "oacs": { 00:06:54.452 "security": 0, 00:06:54.452 "format": 0, 00:06:54.452 "firmware": 0, 00:06:54.452 "ns_manage": 0 00:06:54.452 }, 00:06:54.452 
"multi_ctrlr": true, 00:06:54.452 "ana_reporting": false 00:06:54.452 }, 00:06:54.452 "vs": { 00:06:54.452 "nvme_version": "1.3" 00:06:54.452 }, 00:06:54.452 "ns_data": { 00:06:54.452 "id": 1, 00:06:54.452 "can_share": true 00:06:54.452 } 00:06:54.452 } 00:06:54.452 ], 00:06:54.452 "mp_policy": "active_passive" 00:06:54.452 } 00:06:54.452 } 00:06:54.452 ] 00:06:54.711 18:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3060191 00:06:54.711 18:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:54.711 18:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:54.711 Running I/O for 10 seconds... 00:06:55.645 Latency(us) 00:06:55.645 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:55.645 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:55.645 Nvme0n1 : 1.00 14186.00 55.41 0.00 0.00 0.00 0.00 0.00 00:06:55.645 =================================================================================================================== 00:06:55.645 Total : 14186.00 55.41 0.00 0.00 0.00 0.00 0.00 00:06:55.645 00:06:56.579 18:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c4e826dc-5c59-4cbd-98ea-8967fd0acaff 00:06:56.837 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:56.837 Nvme0n1 : 2.00 14278.50 55.78 0.00 0.00 0.00 0.00 0.00 00:06:56.837 =================================================================================================================== 00:06:56.837 Total : 14278.50 55.78 0.00 0.00 0.00 0.00 0.00 00:06:56.837 00:06:56.837 true 00:06:56.837 18:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4e826dc-5c59-4cbd-98ea-8967fd0acaff 00:06:56.837 18:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:06:57.094 18:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:57.094 18:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:57.094 18:49:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3060191 00:06:57.660 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:57.660 Nvme0n1 : 3.00 14431.33 56.37 0.00 0.00 0.00 0.00 0.00 00:06:57.660 =================================================================================================================== 00:06:57.660 Total : 14431.33 56.37 0.00 0.00 0.00 0.00 0.00 00:06:57.660 00:06:58.608 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:58.608 Nvme0n1 : 4.00 14487.75 56.59 0.00 0.00 0.00 0.00 0.00 00:06:58.608 =================================================================================================================== 00:06:58.608 Total : 14487.75 56.59 0.00 0.00 0.00 0.00 0.00 00:06:58.608 00:06:59.981 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:06:59.981 Nvme0n1 : 5.00 14556.00 56.86 0.00 0.00 0.00 0.00 0.00 00:06:59.981 =================================================================================================================== 00:06:59.981 Total : 14556.00 56.86 0.00 0.00 0.00 0.00 0.00 00:06:59.981 00:07:00.914 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:00.914 Nvme0n1 : 6.00 14625.50 57.13 0.00 0.00 0.00 0.00 0.00 00:07:00.914 =================================================================================================================== 00:07:00.914 Total : 14625.50 57.13 0.00 0.00 0.00 0.00 0.00 00:07:00.914 00:07:01.848 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:01.848 Nvme0n1 : 7.00 14681.00 57.35 0.00 0.00 0.00 0.00 0.00 00:07:01.848 =================================================================================================================== 00:07:01.848 Total : 14681.00 57.35 0.00 0.00 0.00 0.00 0.00 00:07:01.848 00:07:02.781 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:02.781 Nvme0n1 : 8.00 14717.50 57.49 0.00 0.00 0.00 0.00 0.00 00:07:02.781 =================================================================================================================== 00:07:02.781 Total : 14717.50 57.49 0.00 0.00 0.00 0.00 0.00 00:07:02.781 00:07:03.714 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:03.714 Nvme0n1 : 9.00 14724.33 57.52 0.00 0.00 0.00 0.00 0.00 00:07:03.714 =================================================================================================================== 00:07:03.714 Total : 14724.33 57.52 0.00 0.00 0.00 0.00 0.00 00:07:03.714 00:07:04.647 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:04.647 Nvme0n1 : 10.00 14754.90 57.64 0.00 0.00 0.00 0.00 0.00 00:07:04.647 =================================================================================================================== 00:07:04.647 Total : 14754.90 57.64 0.00 0.00 0.00 0.00 0.00 00:07:04.647 00:07:04.647 00:07:04.647 Latency(us) 00:07:04.647 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:04.647 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:04.647 Nvme0n1 : 10.01 14758.45 57.65 0.00 0.00 8668.15 4975.88 20000.62 00:07:04.647 =================================================================================================================== 00:07:04.647 Total : 14758.45 57.65 0.00 0.00 8668.15 4975.88 20000.62 00:07:04.647 0 00:07:04.647 18:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3060051 00:07:04.647 18:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 3060051 ']' 00:07:04.647 18:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 3060051 00:07:04.647 18:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:07:04.647 18:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:04.647 18:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3060051 00:07:04.904 18:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:04.904 
18:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:04.904 18:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3060051' 00:07:04.904 killing process with pid 3060051 00:07:04.904 18:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 3060051 00:07:04.904 Received shutdown signal, test time was about 10.000000 seconds 00:07:04.904 00:07:04.904 Latency(us) 00:07:04.904 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:04.904 =================================================================================================================== 00:07:04.904 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:04.904 18:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 3060051 00:07:05.162 18:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:05.419 18:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:05.677 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4e826dc-5c59-4cbd-98ea-8967fd0acaff 00:07:05.677 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:05.934 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:05.934 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:05.934 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3057426 00:07:05.934 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3057426 00:07:05.934 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3057426 Killed "${NVMF_APP[@]}" "$@" 00:07:05.934 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:05.934 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:05.934 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:05.934 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:05.934 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:05.934 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3061527 00:07:05.934 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:05.934 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
nvmf/common.sh@482 -- # waitforlisten 3061527 00:07:05.934 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3061527 ']' 00:07:05.934 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.934 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.934 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.934 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.934 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:05.934 [2024-07-24 18:49:43.396910] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:07:05.934 [2024-07-24 18:49:43.396999] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:05.934 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.934 [2024-07-24 18:49:43.463732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.191 [2024-07-24 18:49:43.574115] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:06.191 [2024-07-24 18:49:43.574187] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:06.191 [2024-07-24 18:49:43.574215] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:06.191 [2024-07-24 18:49:43.574226] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:06.191 [2024-07-24 18:49:43.574236] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
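This is the step that earns the dirty variant its name: with free_clusters already verified at 61, the harness kill -9s the nvmf target rather than deleting the lvstore, then boots a fresh one. Because the lvstore was never cleanly unloaded, re-attaching the same backing file (next in the trace) makes the blobstore detect the unclean shutdown and run recovery, visible as the "Performing recovery on blobstore" and "Recover: blob 0x..." notices that follow. A rough sketch of the reload on the new target, under the same placeholder assumptions as the sketch above:

  # the old target died via kill -9; on the fresh target, re-creating the
  # AIO bdev triggers blobstore recovery and re-exposes lvs/lvol
  scripts/rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py bdev_get_bdevs -b <lvol-uuid> -t 2000      # lvol is back
  scripts/rpc.py bdev_lvol_get_lvstores -u <lvs-uuid>       # cluster counts

The assertions that follow only care that the grown geometry survived the crash: free_clusters must still be 61 and total_data_clusters still 99.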
00:07:06.191 [2024-07-24 18:49:43.574283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.191 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:06.191 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:06.191 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:06.191 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:06.191 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:06.191 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:06.191 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:06.448 [2024-07-24 18:49:43.971660] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:06.448 [2024-07-24 18:49:43.971794] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:06.448 [2024-07-24 18:49:43.971852] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:06.448 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:06.448 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev eb822a83-8d99-4e97-a1dd-11f9f712468e 00:07:06.448 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=eb822a83-8d99-4e97-a1dd-11f9f712468e 00:07:06.448 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:06.448 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:06.448 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:06.448 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:06.448 18:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:06.706 18:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b eb822a83-8d99-4e97-a1dd-11f9f712468e -t 2000 00:07:06.965 [ 00:07:06.965 { 00:07:06.965 "name": "eb822a83-8d99-4e97-a1dd-11f9f712468e", 00:07:06.965 "aliases": [ 00:07:06.965 "lvs/lvol" 00:07:06.965 ], 00:07:06.965 "product_name": "Logical Volume", 00:07:06.965 "block_size": 4096, 00:07:06.965 "num_blocks": 38912, 00:07:06.965 "uuid": "eb822a83-8d99-4e97-a1dd-11f9f712468e", 00:07:06.965 "assigned_rate_limits": { 00:07:06.965 "rw_ios_per_sec": 0, 00:07:06.965 "rw_mbytes_per_sec": 0, 00:07:06.965 "r_mbytes_per_sec": 0, 00:07:06.965 "w_mbytes_per_sec": 0 00:07:06.965 }, 00:07:06.965 "claimed": false, 00:07:06.965 "zoned": false, 
00:07:06.965 "supported_io_types": { 00:07:06.965 "read": true, 00:07:06.965 "write": true, 00:07:06.965 "unmap": true, 00:07:06.965 "flush": false, 00:07:06.965 "reset": true, 00:07:06.965 "nvme_admin": false, 00:07:06.965 "nvme_io": false, 00:07:06.965 "nvme_io_md": false, 00:07:06.965 "write_zeroes": true, 00:07:06.965 "zcopy": false, 00:07:06.965 "get_zone_info": false, 00:07:06.965 "zone_management": false, 00:07:06.965 "zone_append": false, 00:07:06.965 "compare": false, 00:07:06.965 "compare_and_write": false, 00:07:06.965 "abort": false, 00:07:06.965 "seek_hole": true, 00:07:06.965 "seek_data": true, 00:07:06.965 "copy": false, 00:07:06.965 "nvme_iov_md": false 00:07:06.965 }, 00:07:06.965 "driver_specific": { 00:07:06.965 "lvol": { 00:07:06.965 "lvol_store_uuid": "c4e826dc-5c59-4cbd-98ea-8967fd0acaff", 00:07:06.965 "base_bdev": "aio_bdev", 00:07:06.965 "thin_provision": false, 00:07:06.965 "num_allocated_clusters": 38, 00:07:06.965 "snapshot": false, 00:07:06.965 "clone": false, 00:07:06.965 "esnap_clone": false 00:07:06.965 } 00:07:06.965 } 00:07:06.965 } 00:07:06.965 ] 00:07:06.965 18:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:06.965 18:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4e826dc-5c59-4cbd-98ea-8967fd0acaff 00:07:06.965 18:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:07.223 18:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:07.223 18:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4e826dc-5c59-4cbd-98ea-8967fd0acaff 00:07:07.223 18:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:07.501 18:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:07.501 18:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:07.759 [2024-07-24 18:49:45.232435] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:07.759 18:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4e826dc-5c59-4cbd-98ea-8967fd0acaff 00:07:07.759 18:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:07:07.759 18:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4e826dc-5c59-4cbd-98ea-8967fd0acaff 00:07:07.759 18:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:07.759 18:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:07:07.759 18:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:07.759 18:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.759 18:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:07.759 18:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.759 18:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:07.759 18:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:07.759 18:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4e826dc-5c59-4cbd-98ea-8967fd0acaff 00:07:08.017 request: 00:07:08.017 { 00:07:08.017 "uuid": "c4e826dc-5c59-4cbd-98ea-8967fd0acaff", 00:07:08.017 "method": "bdev_lvol_get_lvstores", 00:07:08.017 "req_id": 1 00:07:08.017 } 00:07:08.017 Got JSON-RPC error response 00:07:08.017 response: 00:07:08.017 { 00:07:08.017 "code": -19, 00:07:08.017 "message": "No such device" 00:07:08.017 } 00:07:08.017 18:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:07:08.017 18:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:08.017 18:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:08.017 18:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:08.017 18:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:08.275 aio_bdev 00:07:08.275 18:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev eb822a83-8d99-4e97-a1dd-11f9f712468e 00:07:08.275 18:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=eb822a83-8d99-4e97-a1dd-11f9f712468e 00:07:08.275 18:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:08.275 18:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:08.275 18:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:08.275 18:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:08.275 18:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:08.532 18:49:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b eb822a83-8d99-4e97-a1dd-11f9f712468e -t 2000 00:07:08.790 [ 00:07:08.790 { 00:07:08.790 "name": "eb822a83-8d99-4e97-a1dd-11f9f712468e", 00:07:08.790 "aliases": [ 00:07:08.790 "lvs/lvol" 00:07:08.790 ], 00:07:08.790 "product_name": "Logical Volume", 00:07:08.790 "block_size": 4096, 00:07:08.790 "num_blocks": 38912, 00:07:08.790 "uuid": "eb822a83-8d99-4e97-a1dd-11f9f712468e", 00:07:08.790 "assigned_rate_limits": { 00:07:08.790 "rw_ios_per_sec": 0, 00:07:08.790 "rw_mbytes_per_sec": 0, 00:07:08.790 "r_mbytes_per_sec": 0, 00:07:08.790 "w_mbytes_per_sec": 0 00:07:08.790 }, 00:07:08.790 "claimed": false, 00:07:08.790 "zoned": false, 00:07:08.790 "supported_io_types": { 00:07:08.790 "read": true, 00:07:08.790 "write": true, 00:07:08.790 "unmap": true, 00:07:08.790 "flush": false, 00:07:08.790 "reset": true, 00:07:08.790 "nvme_admin": false, 00:07:08.790 "nvme_io": false, 00:07:08.790 "nvme_io_md": false, 00:07:08.790 "write_zeroes": true, 00:07:08.790 "zcopy": false, 00:07:08.790 "get_zone_info": false, 00:07:08.790 "zone_management": false, 00:07:08.790 "zone_append": false, 00:07:08.790 "compare": false, 00:07:08.790 "compare_and_write": false, 00:07:08.790 "abort": false, 00:07:08.790 "seek_hole": true, 00:07:08.790 "seek_data": true, 00:07:08.790 "copy": false, 00:07:08.790 "nvme_iov_md": false 00:07:08.790 }, 00:07:08.790 "driver_specific": { 00:07:08.790 "lvol": { 00:07:08.790 "lvol_store_uuid": "c4e826dc-5c59-4cbd-98ea-8967fd0acaff", 00:07:08.790 "base_bdev": "aio_bdev", 00:07:08.790 "thin_provision": false, 00:07:08.790 "num_allocated_clusters": 38, 00:07:08.790 "snapshot": false, 00:07:08.790 "clone": false, 00:07:08.790 "esnap_clone": false 00:07:08.790 } 00:07:08.790 } 00:07:08.790 } 00:07:08.790 ] 00:07:08.790 18:49:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:08.790 18:49:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4e826dc-5c59-4cbd-98ea-8967fd0acaff 00:07:08.790 18:49:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:09.048 18:49:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:09.048 18:49:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4e826dc-5c59-4cbd-98ea-8967fd0acaff 00:07:09.048 18:49:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:09.306 18:49:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:09.306 18:49:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete eb822a83-8d99-4e97-a1dd-11f9f712468e 00:07:09.564 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c4e826dc-5c59-4cbd-98ea-8967fd0acaff 
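
Note: the lvs_grow_dirty sequence above is the heart of this test. The AIO base bdev is deleted while the lvstore built on it is still open, bdev_lvol_get_lvstores is shown failing with -19 (No such device), the AIO bdev is then recreated from the same backing file, and after bdev_wait_for_examine the lvstore and its lvol reappear with the original UUIDs and cluster counts, which the free_clusters/total_data_clusters checks confirm before the cleanup RPCs that follow. A minimal sketch of that recovery flow, assuming a running SPDK target, the in-tree scripts/rpc.py, and an illustrative backing-file path (LVS_UUID is a placeholder to substitute):

    # Create a file-backed AIO bdev with 4096-byte blocks; the lvstore lives on it.
    scripts/rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
    # Simulate losing the base device while the lvstore is still open.
    scripts/rpc.py bdev_aio_delete aio_bdev
    # The lvstore is now unreachable; this RPC fails with -19 "No such device".
    scripts/rpc.py bdev_lvol_get_lvstores -u "$LVS_UUID" || echo "lvstore gone, as expected"
    # Re-attach the same backing file and let examine-on-load rediscover the metadata.
    scripts/rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
    scripts/rpc.py bdev_wait_for_examine
    # Free and total cluster counts should match their pre-failure values.
    scripts/rpc.py bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].free_clusters'
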
00:07:09.821 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:10.078 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:10.078 00:07:10.078 real 0m19.783s 00:07:10.078 user 0m50.102s 00:07:10.078 sys 0m4.665s 00:07:10.078 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.078 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:10.078 ************************************ 00:07:10.078 END TEST lvs_grow_dirty 00:07:10.078 ************************************ 00:07:10.078 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:10.079 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:07:10.079 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:07:10.079 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:07:10.079 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:10.079 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:07:10.079 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:07:10.079 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:07:10.079 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:10.079 nvmf_trace.0 00:07:10.079 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:07:10.079 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:10.079 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:10.079 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:07:10.079 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:10.079 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:07:10.079 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:10.079 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:10.079 rmmod nvme_tcp 00:07:10.079 rmmod nvme_fabrics 00:07:10.365 rmmod nvme_keyring 00:07:10.365 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:10.365 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:07:10.365 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:07:10.365 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3061527 ']' 00:07:10.365 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3061527 00:07:10.365 
18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 3061527 ']' 00:07:10.365 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 3061527 00:07:10.365 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:07:10.365 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:10.365 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3061527 00:07:10.365 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:10.366 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:10.366 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3061527' 00:07:10.366 killing process with pid 3061527 00:07:10.366 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 3061527 00:07:10.366 18:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 3061527 00:07:10.624 18:49:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:10.624 18:49:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:10.624 18:49:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:10.624 18:49:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:10.624 18:49:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:10.624 18:49:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.624 18:49:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:10.624 18:49:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.523 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:12.523 00:07:12.523 real 0m42.834s 00:07:12.523 user 1m12.881s 00:07:12.523 sys 0m8.509s 00:07:12.523 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.523 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:12.523 ************************************ 00:07:12.523 END TEST nvmf_lvs_grow 00:07:12.523 ************************************ 00:07:12.523 18:49:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:12.523 18:49:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:12.523 18:49:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.523 18:49:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:12.781 ************************************ 00:07:12.781 START TEST nvmf_bdev_io_wait 00:07:12.781 ************************************ 00:07:12.781 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:12.781 * Looking for test storage... 00:07:12.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:12.781 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:12.781 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:12.781 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:12.781 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:12.781 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:12.781 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:12.781 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:12.781 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:12.781 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:12.781 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:12.781 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:12.781 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:12.781 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:12.781 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:12.781 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:12.781 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:12.781 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:12.781 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:12.781 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:12.781 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.781 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.781 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.781 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.782 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.782 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.782 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:12.782 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.782 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:07:12.782 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:12.782 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:12.782 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:12.782 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:12.782 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:12.782 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:12.782 
18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:12.782 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:12.782 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:12.782 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:12.782 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:12.782 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:12.782 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:12.782 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:12.782 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:12.782 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:12.782 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.782 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:12.782 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.782 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:12.782 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:12.782 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:07:12.782 18:49:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:14.681 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:14.681 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:07:14.681 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:14.681 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:14.681 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:14.681 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:14.681 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:14.681 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:07:14.681 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:14.681 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:07:14.681 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:07:14.681 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:07:14.681 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:07:14.681 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:07:14.682 18:49:52 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:14.682 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:14.682 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:14.682 Found net devices under 0000:09:00.0: cvl_0_0 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:14.682 Found net devices under 0000:09:00.1: cvl_0_1 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:14.682 18:49:52 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:14.682 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:14.940 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:14.940 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:14.940 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:14.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:14.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:07:14.940 00:07:14.940 --- 10.0.0.2 ping statistics --- 00:07:14.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.940 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:07:14.940 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:14.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:14.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:07:14.941 00:07:14.941 --- 10.0.0.1 ping statistics --- 00:07:14.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.941 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:07:14.941 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:14.941 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:07:14.941 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:14.941 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:14.941 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:14.941 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:14.941 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:14.941 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:14.941 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:14.941 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:14.941 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:14.941 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:14.941 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:14.941 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3064059 00:07:14.941 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:14.941 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3064059 00:07:14.941 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 3064059 ']' 00:07:14.941 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.941 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:14.941 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.941 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:14.941 18:49:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:14.941 [2024-07-24 18:49:52.403888] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
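
Note: everything the target does from here on happens inside the cvl_0_0_ns_spdk network namespace that nvmf_tcp_init set up above, with 10.0.0.2 on the target-side port and 10.0.0.1 left on the initiator side; the two pings are the reachability gate before the target is started. A condensed sketch of that bring-up, using the interface names and addresses exactly as they appear in this run:

    # Move the target-side port into its own namespace and address both ends.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Verify reachability in both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # Launch nvmf_tgt in the namespace; --wait-for-rpc holds initialization
    # until framework_start_init arrives over /var/tmp/spdk.sock.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
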
00:07:14.941 [2024-07-24 18:49:52.403966] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:14.941 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.941 [2024-07-24 18:49:52.467434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:15.199 [2024-07-24 18:49:52.579225] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:15.199 [2024-07-24 18:49:52.579278] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:15.199 [2024-07-24 18:49:52.579306] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:15.199 [2024-07-24 18:49:52.579322] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:15.199 [2024-07-24 18:49:52.579332] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:15.199 [2024-07-24 18:49:52.579397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.199 [2024-07-24 18:49:52.579438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.199 [2024-07-24 18:49:52.579492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:15.199 [2024-07-24 18:49:52.579495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.764 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.764 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:07:15.764 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:15.764 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:15.764 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:15.764 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:15.764 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:15.764 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.764 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.022 18:49:53 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:16.022 [2024-07-24 18:49:53.448814] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:16.022 Malloc0 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:16.022 [2024-07-24 18:49:53.510685] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3064218 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3064222 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:16.022 { 00:07:16.022 "params": { 00:07:16.022 "name": "Nvme$subsystem", 00:07:16.022 "trtype": "$TEST_TRANSPORT", 00:07:16.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:16.022 "adrfam": "ipv4", 00:07:16.022 "trsvcid": "$NVMF_PORT", 00:07:16.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:16.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:16.022 "hdgst": ${hdgst:-false}, 00:07:16.022 "ddgst": ${ddgst:-false} 00:07:16.022 }, 00:07:16.022 "method": "bdev_nvme_attach_controller" 00:07:16.022 } 00:07:16.022 EOF 00:07:16.022 )") 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3064224 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:16.022 { 00:07:16.022 "params": { 00:07:16.022 "name": "Nvme$subsystem", 00:07:16.022 "trtype": "$TEST_TRANSPORT", 00:07:16.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:16.022 "adrfam": "ipv4", 00:07:16.022 "trsvcid": "$NVMF_PORT", 00:07:16.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:16.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:16.022 "hdgst": ${hdgst:-false}, 00:07:16.022 "ddgst": ${ddgst:-false} 00:07:16.022 }, 00:07:16.022 "method": "bdev_nvme_attach_controller" 00:07:16.022 } 00:07:16.022 EOF 00:07:16.022 )") 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3064227 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:16.022 { 00:07:16.022 "params": { 00:07:16.022 "name": "Nvme$subsystem", 00:07:16.022 "trtype": "$TEST_TRANSPORT", 00:07:16.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:16.022 "adrfam": "ipv4", 00:07:16.022 "trsvcid": "$NVMF_PORT", 00:07:16.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:16.022 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:16.022 "hdgst": ${hdgst:-false}, 00:07:16.022 "ddgst": ${ddgst:-false} 00:07:16.022 }, 00:07:16.022 "method": "bdev_nvme_attach_controller" 00:07:16.022 } 00:07:16.022 EOF 00:07:16.022 )") 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:16.022 { 00:07:16.022 "params": { 00:07:16.022 "name": "Nvme$subsystem", 00:07:16.022 "trtype": "$TEST_TRANSPORT", 00:07:16.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:16.022 "adrfam": "ipv4", 00:07:16.022 "trsvcid": "$NVMF_PORT", 00:07:16.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:16.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:16.022 "hdgst": ${hdgst:-false}, 00:07:16.022 "ddgst": ${ddgst:-false} 00:07:16.022 }, 00:07:16.022 "method": "bdev_nvme_attach_controller" 00:07:16.022 } 00:07:16.022 EOF 00:07:16.022 )") 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3064218 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:07:16.022 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:16.022 "params": { 00:07:16.022 "name": "Nvme1", 00:07:16.022 "trtype": "tcp", 00:07:16.022 "traddr": "10.0.0.2", 00:07:16.022 "adrfam": "ipv4", 00:07:16.022 "trsvcid": "4420", 00:07:16.022 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:16.022 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:16.022 "hdgst": false, 00:07:16.022 "ddgst": false 00:07:16.023 }, 00:07:16.023 "method": "bdev_nvme_attach_controller" 00:07:16.023 }' 00:07:16.023 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:07:16.023 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:07:16.023 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:07:16.023 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:16.023 "params": { 00:07:16.023 "name": "Nvme1", 00:07:16.023 "trtype": "tcp", 00:07:16.023 "traddr": "10.0.0.2", 00:07:16.023 "adrfam": "ipv4", 00:07:16.023 "trsvcid": "4420", 00:07:16.023 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:16.023 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:16.023 "hdgst": false, 00:07:16.023 "ddgst": false 00:07:16.023 }, 00:07:16.023 "method": "bdev_nvme_attach_controller" 00:07:16.023 }' 00:07:16.023 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:07:16.023 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:16.023 "params": { 00:07:16.023 "name": "Nvme1", 00:07:16.023 "trtype": "tcp", 00:07:16.023 "traddr": "10.0.0.2", 00:07:16.023 "adrfam": "ipv4", 00:07:16.023 "trsvcid": "4420", 00:07:16.023 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:16.023 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:16.023 "hdgst": false, 00:07:16.023 "ddgst": false 00:07:16.023 }, 00:07:16.023 "method": "bdev_nvme_attach_controller" 00:07:16.023 }' 00:07:16.023 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:07:16.023 18:49:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:16.023 "params": { 00:07:16.023 "name": "Nvme1", 00:07:16.023 "trtype": "tcp", 00:07:16.023 "traddr": "10.0.0.2", 00:07:16.023 "adrfam": "ipv4", 00:07:16.023 "trsvcid": "4420", 00:07:16.023 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:16.023 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:16.023 "hdgst": false, 00:07:16.023 "ddgst": false 00:07:16.023 }, 00:07:16.023 "method": "bdev_nvme_attach_controller" 00:07:16.023 }' 00:07:16.023 [2024-07-24 18:49:53.557609] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:07:16.023 [2024-07-24 18:49:53.557618] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:07:16.023 [2024-07-24 18:49:53.557617] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:07:16.023 [2024-07-24 18:49:53.557689] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:16.023 [2024-07-24 18:49:53.557694] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:16.023 [2024-07-24 18:49:53.557699] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:16.023 [2024-07-24 18:49:53.557844] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization...
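
Note: the four EAL instances above start almost simultaneously (the -c 0x40 and -c 0x10 records were interleaved in the raw console stream and are shown de-interleaved here); --file-prefix=spdk1 through spdk4, derived from -i, keeps their DPDK shared-memory runtime files separate, and --proc-type=auto lets each come up as an independent primary process. Once they are running, the test simply waits on the recorded pids, and each instance prints its own one-second latency table, which is what follows. Along the lines of (pid variables as set earlier in this script):

    # Reap all four instances before tearing down the subsystem.
    for pid in "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"; do
        wait "$pid"
    done
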
00:07:16.023 [2024-07-24 18:49:53.557897] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:16.023 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.280 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.280 [2024-07-24 18:49:53.737206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.280 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.281 [2024-07-24 18:49:53.839945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:16.281 [2024-07-24 18:49:53.843831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.538 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.538 [2024-07-24 18:49:53.947812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:07:16.538 [2024-07-24 18:49:53.952666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.538 [2024-07-24 18:49:54.029274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.538 [2024-07-24 18:49:54.056835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:07:16.538 [2024-07-24 18:49:54.122242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:07:16.796 Running I/O for 1 seconds... 00:07:16.796 Running I/O for 1 seconds... 00:07:16.796 Running I/O for 1 seconds... 00:07:16.796 Running I/O for 1 seconds... 00:07:17.730 00:07:17.730 Latency(us) 00:07:17.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:17.730 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:17.730 Nvme1n1 : 1.01 12192.84 47.63 0.00 0.00 10462.57 5558.42 19029.71 00:07:17.730 =================================================================================================================== 00:07:17.730 Total : 12192.84 47.63 0.00 0.00 10462.57 5558.42 19029.71 00:07:17.730 00:07:17.730 Latency(us) 00:07:17.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:17.730 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:17.730 Nvme1n1 : 1.00 193637.91 756.40 0.00 0.00 658.38 259.41 813.13 00:07:17.730 =================================================================================================================== 00:07:17.730 Total : 193637.91 756.40 0.00 0.00 658.38 259.41 813.13 00:07:17.730 00:07:17.730 Latency(us) 00:07:17.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:17.730 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:17.730 Nvme1n1 : 1.05 4375.78 17.09 0.00 0.00 28782.60 13301.38 58254.22 00:07:17.730 =================================================================================================================== 00:07:17.730 Total : 4375.78 17.09 0.00 0.00 28782.60 13301.38 58254.22 00:07:17.730 00:07:17.730 Latency(us) 00:07:17.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:17.730 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:17.730 Nvme1n1 : 1.01 9108.74 35.58 0.00 0.00 13991.83 7427.41 25631.86 00:07:17.730 =================================================================================================================== 00:07:17.730 Total : 9108.74 35.58 0.00 0.00 13991.83 7427.41 25631.86 00:07:18.295 18:49:55 
00:07:18.295 18:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3064222 00:07:18.295 18:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3064224 00:07:18.295 18:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3064227 00:07:18.295 18:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:18.295 18:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.295 18:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:18.295 18:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.295 18:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:18.295 18:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:18.295 18:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:18.295 18:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:07:18.295 18:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:18.295 18:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:07:18.295 18:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:18.295 18:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:18.295 rmmod nvme_tcp 00:07:18.295 rmmod nvme_fabrics 00:07:18.295 rmmod nvme_keyring 00:07:18.295 18:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:18.295 18:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:07:18.295 18:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:07:18.295 18:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3064059 ']' 00:07:18.295 18:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3064059 00:07:18.295 18:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 3064059 ']' 00:07:18.295 18:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 3064059 00:07:18.295 18:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:07:18.295 18:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:18.295 18:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3064059 00:07:18.295 18:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:18.295 18:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:18.295 18:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3064059' 00:07:18.295 killing process with pid 3064059 00:07:18.295 18:49:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 3064059 00:07:18.295 18:49:55
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 3064059 00:07:18.554 18:49:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:18.554 18:49:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:18.554 18:49:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:18.554 18:49:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:18.554 18:49:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:18.554 18:49:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.554 18:49:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:18.554 18:49:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.087 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:21.087 00:07:21.087 real 0m8.036s 00:07:21.087 user 0m18.878s 00:07:21.087 sys 0m3.875s 00:07:21.087 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.087 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:21.087 ************************************ 00:07:21.087 END TEST nvmf_bdev_io_wait 00:07:21.087 ************************************ 00:07:21.087 18:49:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:21.087 18:49:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:21.087 18:49:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.087 18:49:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:21.087 ************************************ 00:07:21.087 START TEST nvmf_queue_depth 00:07:21.087 ************************************ 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:21.088 * Looking for test storage... 
00:07:21.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:21.088 18:49:58 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:07:21.088 18:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:07:22.987 18:50:00 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:22.987 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:22.987 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:22.988 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:22.988 18:50:00 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:22.988 Found net devices under 0000:09:00.0: cvl_0_0 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:22.988 Found net devices under 0000:09:00.1: cvl_0_1 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:22.988 
18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:22.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:22.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:07:22.988 00:07:22.988 --- 10.0.0.2 ping statistics --- 00:07:22.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.988 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:22.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:22.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:07:22.988 00:07:22.988 --- 10.0.0.1 ping statistics --- 00:07:22.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.988 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3066444 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3066444 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3066444 ']' 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:22.988 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:22.988 [2024-07-24 18:50:00.502339] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
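For reference, the nvmf_tcp_init sequence traced above reduces to a small two-port topology: the target-side E810 port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, the initiator port (cvl_0_1) stays in the root namespace as 10.0.0.1, and the target application is then launched inside that namespace. Condensed from the trace, with device and namespace names exactly as in this run and the nvmf_tgt path shortened:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root namespace -> target port
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> initiator port
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2

Putting the physical target port in its own namespace is what lets a single host exercise a real NIC-to-NIC TCP path without the kernel short-circuiting the traffic over loopback.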
00:07:22.988 [2024-07-24 18:50:00.502405] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.988 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.988 [2024-07-24 18:50:00.569325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.247 [2024-07-24 18:50:00.688664] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:23.247 [2024-07-24 18:50:00.688729] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:23.247 [2024-07-24 18:50:00.688746] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:23.247 [2024-07-24 18:50:00.688760] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:23.247 [2024-07-24 18:50:00.688771] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:23.247 [2024-07-24 18:50:00.688801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.247 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:23.247 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:07:23.247 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:23.247 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:23.247 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:23.247 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:23.247 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:23.247 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.247 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:23.247 [2024-07-24 18:50:00.840999] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:23.247 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.247 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:23.247 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.247 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:23.504 Malloc0 00:07:23.504 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.504 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:23.504 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.504 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:23.504 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.504 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:23.504 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.504 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:23.504 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.504 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:23.504 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.504 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:23.504 [2024-07-24 18:50:00.900797] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.504 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.504 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3066589 00:07:23.504 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:23.504 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3066589 /var/tmp/bdevperf.sock 00:07:23.505 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:23.505 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3066589 ']' 00:07:23.505 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:23.505 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:23.505 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:23.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:23.505 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:23.505 18:50:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:23.505 [2024-07-24 18:50:00.948687] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
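Stripped of the xtrace noise, the queue_depth target provisioning above is five RPCs against the in-namespace target, after which bdevperf is started in daemon mode (-z) listening on its own RPC socket. In this condensed sketch, rpc.py and bdevperf stand in for the full scripts/rpc.py and build/examples/bdevperf paths; all arguments are as traced:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # host side: wait on /var/tmp/bdevperf.sock with 1024 outstanding 4 KiB verify I/Os
  bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

The controller attach and the actual 10-second run are then driven over that socket, as traced below.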
00:07:23.505 [2024-07-24 18:50:00.948775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3066589 ] 00:07:23.505 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.505 [2024-07-24 18:50:01.006748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.762 [2024-07-24 18:50:01.114948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.762 18:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:23.762 18:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:07:23.762 18:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:23.762 18:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.762 18:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:24.020 NVMe0n1 00:07:24.020 18:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.020 18:50:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:24.020 Running I/O for 10 seconds...
00:07:36.210
00:07:36.210 Latency(us)
00:07:36.210 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:36.210 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:07:36.210 Verification LBA range: start 0x0 length 0x4000
00:07:36.210 NVMe0n1 : 10.08 8564.78 33.46 0.00 0.00 118965.51 21359.88 73788.68
00:07:36.210 ===================================================================================================================
00:07:36.210 Total : 8564.78 33.46 0.00 0.00 118965.51 21359.88 73788.68
00:07:36.210 0
00:07:36.210 18:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3066589 00:07:36.210 18:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3066589 ']' 00:07:36.210 18:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3066589 00:07:36.210 18:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:07:36.210 18:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:36.210 18:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3066589 00:07:36.210 18:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:36.210 18:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:36.210 18:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3066589' 00:07:36.210 killing process with pid 3066589 00:07:36.210 18:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3066589
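The headline number is internally consistent under Little's law: outstanding I/Os = IOPS x average latency, which should recover the configured queue depth once the pipeline is saturated.

  # 8564.78 IO/s * 118965.51 us average latency, converted to outstanding I/Os
  awk 'BEGIN { printf "%.0f\n", 8564.78 * 118965.51 / 1e6 }'   # -> 1019, roughly the -q 1024 depth

In other words, at queue depth 1024 against a single Malloc-backed namespace, the roughly 119 ms average latency is queueing delay rather than a property of the TCP transport itself.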
Received shutdown signal, test time was about 10.000000 seconds
00:07:36.210
00:07:36.210 Latency(us)
00:07:36.210 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:36.210 ===================================================================================================================
00:07:36.210 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:07:36.210 18:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3066589 00:07:36.211 18:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:36.211 18:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:07:36.211 18:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:36.211 18:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:07:36.211 18:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:36.211 18:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:07:36.211 18:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:36.211 18:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:36.211 rmmod nvme_tcp 00:07:36.211 rmmod nvme_fabrics 00:07:36.211 rmmod nvme_keyring 00:07:36.211 18:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:36.211 18:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:07:36.211 18:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:07:36.211 18:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3066444 ']' 00:07:36.211 18:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3066444 00:07:36.211 18:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3066444 ']' 00:07:36.211 18:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3066444 00:07:36.211 18:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:07:36.211 18:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:36.211 18:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3066444 00:07:36.211 18:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 18:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 18:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3066444' killing process with pid 3066444 18:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3066444 18:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3066444 18:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 18:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 18:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth --
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:36.211 18:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:36.211 18:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:36.211 18:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.211 18:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:36.211 18:50:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:36.776 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:36.776 00:07:36.776 real 0m16.133s 00:07:36.776 user 0m22.724s 00:07:36.776 sys 0m3.051s 00:07:36.776 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.776 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:36.776 ************************************ 00:07:36.776 END TEST nvmf_queue_depth 00:07:36.776 ************************************ 00:07:37.034 18:50:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:37.034 18:50:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:37.034 18:50:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.034 18:50:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:37.034 ************************************ 00:07:37.034 START TEST nvmf_target_multipath 00:07:37.034 ************************************ 00:07:37.034 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:37.034 * Looking for test storage... 
00:07:37.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.034 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:37.034 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:07:37.034 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:37.034 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:37.034 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:37.034 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:37.034 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:37.034 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:37.034 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:37.034 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:37.034 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:37.034 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:37.034 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:37.034 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:37.034 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:37.034 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:37.034 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:37.034 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:37.034 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:37.034 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.034 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.034 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.034 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.034 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.035 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.035 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:07:37.035 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.035 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:07:37.035 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:37.035 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:37.035 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:37.035 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:37.035 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:37.035 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:37.035 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:37.035 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:37.035 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:37.035 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:37.035 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:07:37.035 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.035 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:37.035 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:37.035 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:37.035 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:37.035 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:37.035 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:37.035 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.035 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:37.035 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.035 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:37.035 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:37.035 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:07:37.035 18:50:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:38.938 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:38.938 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:07:38.938 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:38.938 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:38.938 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 
00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:38.939 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:38.939 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:38.939 Found net devices under 0000:09:00.0: cvl_0_0 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.939 18:50:16 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:38.939 Found net devices under 0000:09:00.1: cvl_0_1 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:38.939 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:38.940 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:38.940 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:38.940 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:38.940 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:38.940 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:38.940 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:38.940 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:38.940 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:38.940 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:38.940 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:38.940 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:38.940 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:38.940 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:38.940 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:38.940 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:38.940 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:38.940 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:38.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:07:38.940 00:07:38.940 --- 10.0.0.2 ping statistics --- 00:07:38.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.940 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:07:38.940 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:38.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:38.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:07:38.940 00:07:38.940 --- 10.0.0.1 ping statistics --- 00:07:38.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.940 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:07:38.940 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:38.940 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:07:38.940 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:38.940 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:38.940 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:38.940 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:38.940 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:38.940 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:38.940 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:39.241 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:07:39.241 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:07:39.241 only one NIC for nvmf test 00:07:39.241 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:07:39.241 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:39.241 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:07:39.241 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:39.241 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:07:39.241 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:39.241 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:39.241 rmmod nvme_tcp 00:07:39.241 rmmod nvme_fabrics 00:07:39.241 rmmod nvme_keyring 00:07:39.241 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:39.241 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:07:39.241 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:07:39.241 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:39.241 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:39.241 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:39.241 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:39.241 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:39.241 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:39.241 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.241 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:39.241 18:50:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.153 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:41.153 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:07:41.153 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:07:41.153 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:41.153 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:07:41.153 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:41.153 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:07:41.153 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:41.153 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:41.153 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:41.153 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:07:41.153 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:07:41.153 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:41.153 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:41.153 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:41.153 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:41.153 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:41.153 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:41.153 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.153 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:41.153 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.153 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:41.153 00:07:41.153 real 0m4.242s 
00:07:41.153 user 0m0.822s 00:07:41.153 sys 0m1.397s 00:07:41.153 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.153 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:41.153 ************************************ 00:07:41.153 END TEST nvmf_target_multipath 00:07:41.153 ************************************ 00:07:41.153 18:50:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:07:41.153 18:50:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:41.153 18:50:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.153 18:50:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:41.153 ************************************ 00:07:41.153 START TEST nvmf_zcopy 00:07:41.153 ************************************ 00:07:41.153 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:07:41.412 * Looking for test storage... 00:07:41.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:41.412 18:50:18 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:41.412 18:50:18 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:07:41.412 18:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:07:43.314 18:50:20 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:43.314 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:43.314 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 
-- # [[ ice == unbound ]] 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:43.314 Found net devices under 0000:09:00.0: cvl_0_0 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:43.314 Found net devices under 0000:09:00.1: cvl_0_1 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:43.314 18:50:20 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:43.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:43.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:07:43.314 00:07:43.314 --- 10.0.0.2 ping statistics --- 00:07:43.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.314 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:43.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:43.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:07:43.314 00:07:43.314 --- 10.0.0.1 ping statistics --- 00:07:43.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.314 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:43.314 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:43.315 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:43.315 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:43.315 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:43.315 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:07:43.315 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:43.315 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:43.315 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:43.315 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3071655 00:07:43.315 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:43.315 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3071655 00:07:43.315 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 3071655 ']' 00:07:43.315 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.315 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:43.315 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.315 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:43.315 18:50:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:43.573 [2024-07-24 18:50:20.921306] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
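Both ping checks pass, confirming the split topology the ip/iptables entries above built: the target-side port cvl_0_0 lives inside the cvl_0_0_ns_spdk namespace as 10.0.0.2/24, the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1/24, and TCP port 4420 is opened for NVMe/TCP. Condensed from the commands traced above (interface names and addresses exactly as in this run), the wiring is:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator port stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                         # root ns -> namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1     # namespace -> root ns

nvmfappstart then launches nvmf_tgt inside that namespace (the NVMF_TARGET_NS_CMD prefix in the trace) and waitforlisten blocks until pid 3071655 answers on /var/tmp/spdk.sock; the DPDK EAL parameter dump for that process continues directly below.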
00:07:43.573 [2024-07-24 18:50:20.921377] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.573 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.573 [2024-07-24 18:50:20.986815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.573 [2024-07-24 18:50:21.095461] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:43.573 [2024-07-24 18:50:21.095514] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:43.573 [2024-07-24 18:50:21.095527] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:43.573 [2024-07-24 18:50:21.095539] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:43.573 [2024-07-24 18:50:21.095555] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:43.573 [2024-07-24 18:50:21.095582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:43.831 [2024-07-24 18:50:21.245258] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:43.831 [2024-07-24 18:50:21.261451] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:43.831 malloc0 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:43.831 { 00:07:43.831 "params": { 00:07:43.831 "name": "Nvme$subsystem", 00:07:43.831 "trtype": "$TEST_TRANSPORT", 00:07:43.831 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:43.831 "adrfam": "ipv4", 00:07:43.831 "trsvcid": "$NVMF_PORT", 00:07:43.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:43.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:43.831 "hdgst": ${hdgst:-false}, 00:07:43.831 "ddgst": ${ddgst:-false} 00:07:43.831 }, 00:07:43.831 "method": "bdev_nvme_attach_controller" 00:07:43.831 } 00:07:43.831 EOF 00:07:43.831 )") 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
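The target is now listening on 10.0.0.2:4420 for both the data subsystem and discovery. Written out against scripts/rpc.py, the provisioning sequence just traced is equivalent to the sketch below (the script itself issues these through its rpc_cmd wrapper over /var/tmp/spdk.sock):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                # TCP transport with zero-copy enabled
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0                       # 32 MiB bdev, 4 KiB blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

gen_nvmf_target_json then renders the matching initiator side, the bdev_nvme_attach_controller parameter block printed in the next entries, which bdevperf uses to connect to this subsystem.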
00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:07:43.831 18:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:43.831 "params": { 00:07:43.831 "name": "Nvme1", 00:07:43.831 "trtype": "tcp", 00:07:43.831 "traddr": "10.0.0.2", 00:07:43.831 "adrfam": "ipv4", 00:07:43.831 "trsvcid": "4420", 00:07:43.831 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:43.831 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:43.831 "hdgst": false, 00:07:43.831 "ddgst": false 00:07:43.831 }, 00:07:43.831 "method": "bdev_nvme_attach_controller" 00:07:43.831 }' 00:07:43.831 [2024-07-24 18:50:21.349867] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:07:43.831 [2024-07-24 18:50:21.349950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3071797 ] 00:07:43.831 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.831 [2024-07-24 18:50:21.413230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.090 [2024-07-24 18:50:21.532477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.348 Running I/O for 10 seconds... 00:07:54.310 00:07:54.310 Latency(us) 00:07:54.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.310 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:07:54.310 Verification LBA range: start 0x0 length 0x1000 00:07:54.310 Nvme1n1 : 10.02 5613.48 43.86 0.00 0.00 22740.30 3883.61 31845.64 00:07:54.310 =================================================================================================================== 00:07:54.310 Total : 5613.48 43.86 0.00 0.00 22740.30 3883.61 31845.64 00:07:54.567 18:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3072998 00:07:54.567 18:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:07:54.567 18:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:54.567 18:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:07:54.567 18:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:07:54.567 18:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:07:54.567 18:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:07:54.567 18:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:54.567 18:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:54.567 { 00:07:54.567 "params": { 00:07:54.567 "name": "Nvme$subsystem", 00:07:54.567 "trtype": "$TEST_TRANSPORT", 00:07:54.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:54.567 "adrfam": "ipv4", 00:07:54.567 "trsvcid": "$NVMF_PORT", 00:07:54.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:54.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:54.567 "hdgst": ${hdgst:-false}, 00:07:54.567 "ddgst": ${ddgst:-false} 00:07:54.567 }, 00:07:54.568 "method": "bdev_nvme_attach_controller" 00:07:54.568 } 00:07:54.568 EOF 00:07:54.568 )") 00:07:54.568 18:50:32 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:07:54.568 [2024-07-24 18:50:32.063337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:54.568 [2024-07-24 18:50:32.063393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:54.568 18:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:07:54.568 18:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:07:54.568 18:50:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:54.568 "params": { 00:07:54.568 "name": "Nvme1", 00:07:54.568 "trtype": "tcp", 00:07:54.568 "traddr": "10.0.0.2", 00:07:54.568 "adrfam": "ipv4", 00:07:54.568 "trsvcid": "4420", 00:07:54.568 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:54.568 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:54.568 "hdgst": false, 00:07:54.568 "ddgst": false 00:07:54.568 }, 00:07:54.568 "method": "bdev_nvme_attach_controller" 00:07:54.568 }' 00:07:54.568 [2024-07-24 18:50:32.071301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:54.568 [2024-07-24 18:50:32.071326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:54.568 [2024-07-24 18:50:32.079319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:54.568 [2024-07-24 18:50:32.079342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:54.568 [2024-07-24 18:50:32.087338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:54.568 [2024-07-24 18:50:32.087360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:54.568 [2024-07-24 18:50:32.095361] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:54.568 [2024-07-24 18:50:32.095398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:54.568 [2024-07-24 18:50:32.100218] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
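The 10-second verify pass completed at roughly 5613 IOPS (43.86 MiB/s) of 8 KiB I/O with no failures or timeouts, and the harness immediately relaunches bdevperf for 5 seconds of 50/50 random read/write (-w randrw -M 50) while it starts hammering the namespace-management path; the EAL parameter dump for that second process (file-prefix spdk_pid3072998) continues directly below. Note that the generated config never touches disk: gen_nvmf_target_json is handed to bdevperf via process substitution, which is why the traces show --json /dev/fd/62 and /dev/fd/63. A self-contained sketch of the same technique, with the attach-controller fragment printed above wrapped in SPDK's JSON-config skeleton (the wrapper is an assumption here; the trace only shows the fragment):

    # Assumed wrapper around the printed fragment; SPDK JSON configs take the
    # form {"subsystems":[{"subsystem":"bdev","config":[ ... ]}]}.
    config='{ "subsystems": [ { "subsystem": "bdev", "config": [ {
      "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode1",
                  "hostnqn": "nqn.2016-06.io.spdk:host1",
                  "hdgst": false, "ddgst": false },
      "method": "bdev_nvme_attach_controller" } ] } ] }'
    # Process substitution exposes the config to bdevperf as /dev/fd/NN; no temp file.
    ./build/examples/bdevperf --json <(printf '%s\n' "$config") \
        -t 5 -q 128 -w randrw -M 50 -o 8192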
00:07:54.568 [2024-07-24 18:50:32.100280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3072998 ] 00:07:54.568 [2024-07-24 18:50:32.103399] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:54.568 [2024-07-24 18:50:32.103420] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:54.568 [2024-07-24 18:50:32.111419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:54.568 [2024-07-24 18:50:32.111440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:54.568 [2024-07-24 18:50:32.119440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:54.568 [2024-07-24 18:50:32.119475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:54.568 [2024-07-24 18:50:32.127476] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:54.568 [2024-07-24 18:50:32.127496] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:54.568 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.568 [2024-07-24 18:50:32.135483] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:54.568 [2024-07-24 18:50:32.135508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:54.568 [2024-07-24 18:50:32.143505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:54.568 [2024-07-24 18:50:32.143530] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:54.568 [2024-07-24 18:50:32.151527] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:54.568 [2024-07-24 18:50:32.151552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:54.568 [2024-07-24 18:50:32.159550] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:54.568 [2024-07-24 18:50:32.159574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:54.568 [2024-07-24 18:50:32.162777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.568 [2024-07-24 18:50:32.167587] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:54.568 [2024-07-24 18:50:32.167623] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:54.826 [2024-07-24 18:50:32.175627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:54.826 [2024-07-24 18:50:32.175664] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:54.826 [2024-07-24 18:50:32.183621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:54.826 [2024-07-24 18:50:32.183647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:54.826 [2024-07-24 18:50:32.191640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:54.826 [2024-07-24 18:50:32.191666] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:54.826 [2024-07-24 18:50:32.199663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:54.826 [2024-07-24 
[... error pair repeats, 18:50:32.207 through 18:50:32.279 ...]
00:07:54.826 [2024-07-24 18:50:32.285050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[... error pair repeats, 18:50:32.287 through 18:50:32.432 ...]
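For reference on the earlier EAL notice: "No free 2048 kB hugepages reported on node 1" means no 2 MB hugepages were available on that NUMA node; the app still started, so its allocation was satisfied elsewhere. When the notice does indicate a real shortage, hugepages are normally reserved before launching SPDK apps. A sketch of the two usual methods (the HUGEMEM and page-count values here are arbitrary examples):

  # Reserve hugepage memory through SPDK's setup script:
  sudo HUGEMEM=4096 ./scripts/setup.sh
  # Or pin 2 MB hugepages to NUMA node 1 directly via sysfs:
  echo 1024 | sudo tee /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages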
[... error pair repeats, 18:50:32.440 through 18:50:32.504 ...]
00:07:55.084 Running I/O for 5 seconds...
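"Running I/O for 5 seconds..." is bdevperf's start-of-run banner, and the single reactor on core 0 matches the -c 0x1 core mask in the EAL parameters above. A representative invocation with a 5-second runtime (queue depth, I/O size, workload, and config filename are illustrative assumptions, not values recorded in this log):

  # Illustrative bdevperf run: 5-second workload pinned to core 0.
  ./build/examples/bdevperf -m 0x1 -q 128 -o 4096 -w verify -t 5 --json bdevperf.json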
00:07:55.084 [2024-07-24 18:50:32.512662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:07:55.084 [2024-07-24 18:50:32.512690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... error pair repeats, now at roughly 13-19 ms intervals, 18:50:32.525 through 18:50:34.294 ...]
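The error pairs keep arriving at a steady 13-19 ms cadence for the whole I/O window, which is what a driver script retrying the add-namespace RPC in a tight loop would produce. A hypothetical sketch of such a loop (this is an inference from the timestamps, not the actual test script; names reuse the placeholders above):

  # Hypothetical retry loop: one rejected add_ns RPC per iteration while bdevperf runs.
  while kill -0 "$bdevperf_pid" 2>/dev/null; do
      ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 || true
  done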
[... error pair repeats, 18:50:34.311 through 18:50:36.869 ...]
00:07:59.472 [2024-07-24 18:50:36.886402] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:07:59.472 [2024-07-24 18:50:36.886430]
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.472 [2024-07-24 18:50:36.903119] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.472 [2024-07-24 18:50:36.903148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.472 [2024-07-24 18:50:36.920760] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.472 [2024-07-24 18:50:36.920804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.472 [2024-07-24 18:50:36.937122] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.472 [2024-07-24 18:50:36.937150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.472 [2024-07-24 18:50:36.953639] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.472 [2024-07-24 18:50:36.953671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.472 [2024-07-24 18:50:36.972157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.472 [2024-07-24 18:50:36.972185] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.472 [2024-07-24 18:50:36.990444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.472 [2024-07-24 18:50:36.990472] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.472 [2024-07-24 18:50:37.007806] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.472 [2024-07-24 18:50:37.007838] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.472 [2024-07-24 18:50:37.026121] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.472 [2024-07-24 18:50:37.026165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.472 [2024-07-24 18:50:37.044164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.472 [2024-07-24 18:50:37.044193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.472 [2024-07-24 18:50:37.060858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.472 [2024-07-24 18:50:37.060890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.730 [2024-07-24 18:50:37.078391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.730 [2024-07-24 18:50:37.078420] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.730 [2024-07-24 18:50:37.096817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.730 [2024-07-24 18:50:37.096848] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.730 [2024-07-24 18:50:37.114681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.730 [2024-07-24 18:50:37.114713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.730 [2024-07-24 18:50:37.133069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.730 [2024-07-24 18:50:37.133100] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.730 [2024-07-24 18:50:37.151316] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.730 [2024-07-24 18:50:37.151344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.730 [2024-07-24 18:50:37.168052] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.730 [2024-07-24 18:50:37.168084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.730 [2024-07-24 18:50:37.185770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.730 [2024-07-24 18:50:37.185802] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.730 [2024-07-24 18:50:37.203065] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.730 [2024-07-24 18:50:37.203097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.730 [2024-07-24 18:50:37.221890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.730 [2024-07-24 18:50:37.221921] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.730 [2024-07-24 18:50:37.241007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.730 [2024-07-24 18:50:37.241038] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.730 [2024-07-24 18:50:37.260057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.730 [2024-07-24 18:50:37.260088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.730 [2024-07-24 18:50:37.278560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.730 [2024-07-24 18:50:37.278592] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.730 [2024-07-24 18:50:37.296879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.730 [2024-07-24 18:50:37.296910] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.730 [2024-07-24 18:50:37.314669] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.730 [2024-07-24 18:50:37.314701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.988 [2024-07-24 18:50:37.333089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.988 [2024-07-24 18:50:37.333133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.988 [2024-07-24 18:50:37.350860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.988 [2024-07-24 18:50:37.350892] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.988 [2024-07-24 18:50:37.368540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.988 [2024-07-24 18:50:37.368571] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.988 [2024-07-24 18:50:37.386623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.988 [2024-07-24 18:50:37.386654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.988 [2024-07-24 18:50:37.403307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.988 [2024-07-24 18:50:37.403335] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.988 [2024-07-24 18:50:37.421349] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.988 [2024-07-24 18:50:37.421377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.988 [2024-07-24 18:50:37.439611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.988 [2024-07-24 18:50:37.439643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.988 [2024-07-24 18:50:37.457282] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.988 [2024-07-24 18:50:37.457310] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.988 [2024-07-24 18:50:37.475061] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.988 [2024-07-24 18:50:37.475092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.988 [2024-07-24 18:50:37.493270] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.988 [2024-07-24 18:50:37.493299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.988 [2024-07-24 18:50:37.512210] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.988 [2024-07-24 18:50:37.512238] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.988 [2024-07-24 18:50:37.530524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.988 [2024-07-24 18:50:37.530555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.988 [2024-07-24 18:50:37.543810] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.988 [2024-07-24 18:50:37.543841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.988 00:07:59.988 Latency(us) 00:07:59.988 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.988 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:07:59.988 Nvme1n1 : 5.02 7126.90 55.68 0.00 0.00 17923.84 6796.33 27379.48 00:07:59.988 =================================================================================================================== 00:07:59.988 Total : 7126.90 55.68 0.00 0.00 17923.84 6796.33 27379.48 00:07:59.988 [2024-07-24 18:50:37.550622] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.988 [2024-07-24 18:50:37.550650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.988 [2024-07-24 18:50:37.558642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.988 [2024-07-24 18:50:37.558671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.988 [2024-07-24 18:50:37.566644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.988 [2024-07-24 18:50:37.566666] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.988 [2024-07-24 18:50:37.574721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:59.988 [2024-07-24 18:50:37.574766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:59.988 [2024-07-24 18:50:37.582737] 
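A quick consistency check on the table above: at the job's 8192-byte I/O size, 7126.90 IOPS works out to 7126.90 × 8192 / 1048576 ≈ 55.68 MiB/s, which matches the MiB/s column; and with queue depth 128, an average per-I/O latency of 17923.84 us implies 128 / 0.01792384 s ≈ 7141 IOPS, consistent with the measured rate (Average/min/max are per-I/O latency in microseconds, per the Latency(us) header).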
[the same error pair repeats between 18:50:37.550 and 18:50:37.807, now ~8 ms apart after the I/O job completed; the final two pairs and the test cleanup follow]
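Each elided pair above comes from the zcopy test retrying nvmf_subsystem_add_ns against a namespace ID that is already attached while the subsystem is paused, so every call is expected to fail the same way. A minimal sketch of the same rejection using SPDK's rpc.py (subsystem NQN taken from this run; the Malloc bdev names are illustrative, not from this log):

  # attach a bdev as NSID 1 -- succeeds and claims the NSID
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1
  # any further add requesting the same NSID is rejected:
  #   subsystem.c: Requested NSID 1 already in use
  #   nvmf_rpc.c:  Unable to add namespace
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1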
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.247 [2024-07-24 18:50:37.815343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.247 [2024-07-24 18:50:37.823344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:00.247 [2024-07-24 18:50:37.823365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.247 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3072998) - No such process 00:08:00.247 18:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3072998 00:08:00.247 18:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.247 18:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.247 18:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:00.247 18:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.247 18:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:00.247 18:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.247 18:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:00.247 delay0 00:08:00.247 18:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.247 18:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:00.247 18:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.247 18:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:00.505 18:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.505 18:50:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:00.505 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.505 [2024-07-24 18:50:37.986269] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:07.092 Initializing NVMe Controllers 00:08:07.092 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:07.092 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:07.092 Initialization complete. Launching workers. 
00:08:07.092 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 790 00:08:07.092 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1077, failed to submit 33 00:08:07.092 success 880, unsuccess 197, failed 0 00:08:07.092 18:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:07.092 18:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:07.092 18:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:07.092 18:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:08:07.092 18:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:07.092 18:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:08:07.092 18:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:07.092 18:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:07.092 rmmod nvme_tcp 00:08:07.092 rmmod nvme_fabrics 00:08:07.092 rmmod nvme_keyring 00:08:07.092 18:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:07.092 18:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:08:07.092 18:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:08:07.092 18:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3071655 ']' 00:08:07.092 18:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3071655 00:08:07.092 18:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 3071655 ']' 00:08:07.092 18:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 3071655 00:08:07.092 18:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:08:07.092 18:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:07.092 18:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3071655 00:08:07.092 18:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:07.092 18:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:07.092 18:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3071655' 00:08:07.092 killing process with pid 3071655 00:08:07.092 18:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 3071655 00:08:07.092 18:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 3071655 00:08:07.092 18:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:07.092 18:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:07.092 18:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:07.092 18:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:07.092 18:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:07.092 18:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.092 18:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:07.092 18:50:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.628 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:09.628 00:08:09.628 real 0m27.911s 00:08:09.628 user 0m38.945s 00:08:09.628 sys 0m8.764s 00:08:09.628 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:09.628 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.628 ************************************ 00:08:09.628 END TEST nvmf_zcopy 00:08:09.628 ************************************ 00:08:09.628 18:50:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:09.628 18:50:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:09.628 18:50:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.628 18:50:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:09.628 ************************************ 00:08:09.628 START TEST nvmf_nmic 00:08:09.628 ************************************ 00:08:09.628 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:09.628 * Looking for test storage... 00:08:09.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:08:09.629 18:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:11.005 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:11.005 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:08:11.005 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:11.005 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:08:11.005 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:11.005 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:11.005 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:11.005 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:08:11.005 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:11.005 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:08:11.005 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:08:11.005 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:08:11.005 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:08:11.005 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:08:11.005 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:08:11.005 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:11.005 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:11.005 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:11.005 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:11.005 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:11.005 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:11.005 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:11.005 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:11.005 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:11.005 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:11.005 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:11.005 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:11.005 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:11.005 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:11.005 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:11.006 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ 
ice == unknown ]] 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:11.006 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:11.006 Found net devices under 0000:09:00.0: cvl_0_0 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:11.006 Found net devices under 0000:09:00.1: cvl_0_1 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:11.006 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:11.264 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:11.264 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:11.264 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:11.264 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:11.264 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:11.264 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:11.264 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:11.264 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:11.264 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:11.264 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:11.264 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:11.264 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:08:11.264 00:08:11.264 --- 10.0.0.2 ping statistics --- 00:08:11.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.264 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:08:11.264 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:11.264 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:11.264 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:08:11.264 00:08:11.264 --- 10.0.0.1 ping statistics --- 00:08:11.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.264 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:08:11.264 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:11.264 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:08:11.264 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:11.264 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:11.264 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:11.264 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:11.264 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:11.264 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:11.264 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:11.264 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:11.264 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:11.264 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:11.264 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:11.264 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3076385 00:08:11.264 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3076385 00:08:11.264 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:11.264 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 3076385 ']' 00:08:11.264 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.264 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:11.264 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.264 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:11.264 18:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:11.264 [2024-07-24 18:50:48.809191] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
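For reference, the nvmfappstart trace above launches the target inside the test's network namespace as ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF. The -m 0xF core mask has bits 0-3 set, which is why four reactors (cores 0-3) come up in the notices that follow, and -e 0xFFFF enables all tracepoint groups, matching the "Tracepoint Group Mask 0xFFFF" notice. A standalone equivalent (workspace path shortened; shm id -i 0 as in this run):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &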
00:08:11.264 [2024-07-24 18:50:48.809285] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:11.264 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.522 [2024-07-24 18:50:48.878373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:11.522 [2024-07-24 18:50:48.997089] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:11.522 [2024-07-24 18:50:48.997173] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:11.522 [2024-07-24 18:50:48.997204] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:11.522 [2024-07-24 18:50:48.997225] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:11.522 [2024-07-24 18:50:48.997235] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:11.522 [2024-07-24 18:50:48.997288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.522 [2024-07-24 18:50:48.997314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:11.522 [2024-07-24 18:50:48.997362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:11.522 [2024-07-24 18:50:48.997364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.455 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:12.455 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:08:12.455 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:12.455 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:12.455 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:12.455 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.455 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:12.455 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.455 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:12.455 [2024-07-24 18:50:49.798870] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.455 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.455 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:12.455 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.455 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:12.455 Malloc0 00:08:12.455 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.455 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:12.455 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:12.455 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:12.455 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.455 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:12.455 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.455 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:12.455 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.455 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:12.455 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.456 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:12.456 [2024-07-24 18:50:49.850219] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.456 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.456 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:12.456 test case1: single bdev can't be used in multiple subsystems 00:08:12.456 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:12.456 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.456 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:12.456 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.456 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:12.456 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.456 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:12.456 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.456 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:12.456 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:12.456 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.456 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:12.456 [2024-07-24 18:50:49.874098] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:12.456 [2024-07-24 18:50:49.874135] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:12.456 [2024-07-24 18:50:49.874160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.456 request: 00:08:12.456 { 00:08:12.456 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:12.456 "namespace": { 
00:08:12.456 "bdev_name": "Malloc0", 00:08:12.456 "no_auto_visible": false 00:08:12.456 }, 00:08:12.456 "method": "nvmf_subsystem_add_ns", 00:08:12.456 "req_id": 1 00:08:12.456 } 00:08:12.456 Got JSON-RPC error response 00:08:12.456 response: 00:08:12.456 { 00:08:12.456 "code": -32602, 00:08:12.456 "message": "Invalid parameters" 00:08:12.456 } 00:08:12.456 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:12.456 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:12.456 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:12.456 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:12.456 Adding namespace failed - expected result. 00:08:12.456 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:12.456 test case2: host connect to nvmf target in multiple paths 00:08:12.456 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:12.456 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.456 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:12.456 [2024-07-24 18:50:49.882235] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:12.456 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.456 18:50:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:13.020 18:50:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:13.952 18:50:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:13.952 18:50:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:08:13.952 18:50:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:13.952 18:50:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:13.952 18:50:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:08:15.846 18:50:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:15.846 18:50:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:15.846 18:50:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:15.846 18:50:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:15.846 18:50:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:15.846 18:50:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 
00:08:15.846 18:50:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:15.846 [global] 00:08:15.846 thread=1 00:08:15.846 invalidate=1 00:08:15.846 rw=write 00:08:15.846 time_based=1 00:08:15.846 runtime=1 00:08:15.846 ioengine=libaio 00:08:15.846 direct=1 00:08:15.846 bs=4096 00:08:15.846 iodepth=1 00:08:15.846 norandommap=0 00:08:15.846 numjobs=1 00:08:15.846 00:08:15.846 verify_dump=1 00:08:15.846 verify_backlog=512 00:08:15.846 verify_state_save=0 00:08:15.846 do_verify=1 00:08:15.846 verify=crc32c-intel 00:08:15.846 [job0] 00:08:15.846 filename=/dev/nvme0n1 00:08:15.846 Could not set queue depth (nvme0n1) 00:08:16.103 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:16.103 fio-3.35 00:08:16.103 Starting 1 thread 00:08:17.474 00:08:17.474 job0: (groupid=0, jobs=1): err= 0: pid=3077034: Wed Jul 24 18:50:54 2024 00:08:17.474 read: IOPS=21, BW=86.4KiB/s (88.5kB/s)(88.0KiB/1018msec) 00:08:17.474 slat (nsec): min=15088, max=34007, avg=25792.09, stdev=8335.29 00:08:17.474 clat (usec): min=449, max=42007, avg=39234.29, stdev=8671.09 00:08:17.474 lat (usec): min=467, max=42040, avg=39260.08, stdev=8672.80 00:08:17.474 clat percentiles (usec): 00:08:17.474 | 1.00th=[ 449], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:08:17.474 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:17.474 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:08:17.474 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:17.474 | 99.99th=[42206] 00:08:17.474 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:08:17.474 slat (usec): min=5, max=28884, avg=67.75, stdev=1276.04 00:08:17.474 clat (usec): min=170, max=387, avg=229.64, stdev=28.56 00:08:17.474 lat (usec): min=176, max=29128, avg=297.38, stdev=1276.93 00:08:17.474 clat percentiles (usec): 00:08:17.474 | 1.00th=[ 178], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 196], 00:08:17.474 | 30.00th=[ 210], 40.00th=[ 235], 50.00th=[ 243], 60.00th=[ 245], 00:08:17.474 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 251], 95.00th=[ 265], 00:08:17.474 | 99.00th=[ 285], 99.50th=[ 302], 99.90th=[ 388], 99.95th=[ 388], 00:08:17.474 | 99.99th=[ 388] 00:08:17.474 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:08:17.474 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:17.474 lat (usec) : 250=82.02%, 500=14.04% 00:08:17.474 lat (msec) : 50=3.93% 00:08:17.474 cpu : usr=0.29%, sys=0.59%, ctx=536, majf=0, minf=2 00:08:17.474 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:17.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:17.474 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:17.474 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:17.474 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:17.474 00:08:17.474 Run status group 0 (all jobs): 00:08:17.474 READ: bw=86.4KiB/s (88.5kB/s), 86.4KiB/s-86.4KiB/s (88.5kB/s-88.5kB/s), io=88.0KiB (90.1kB), run=1018-1018msec 00:08:17.474 WRITE: bw=2012KiB/s (2060kB/s), 2012KiB/s-2012KiB/s (2060kB/s-2060kB/s), io=2048KiB (2097kB), run=1018-1018msec 00:08:17.474 00:08:17.474 Disk stats (read/write): 00:08:17.474 nvme0n1: ios=44/512, merge=0/0, ticks=1692/115, in_queue=1807, util=98.80% 
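Annotation: the fio-wrapper call above expands to the [job0] file printed in the log. As a rough equivalent command line (a reconstruction from those printed parameters, assuming a stock fio and the connected /dev/nvme0n1):

  sudo fio --name=job0 --filename=/dev/nvme0n1 \
      --ioengine=libaio --direct=1 --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
      --thread=1 --time_based=1 --runtime=1 --norandommap=0 --invalidate=1 \
      --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512 \
      --verify_state_save=0

Each --flag here maps one-to-one onto a key in the printed job file; the wrapper's -i 4096, -d 1, -t write, -r 1 arguments supply bs, iodepth, rw, and runtime respectively.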
00:08:17.474 18:50:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:17.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:17.474 18:50:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:17.474 18:50:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:08:17.474 18:50:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:17.474 18:50:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:17.474 18:50:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:17.474 18:50:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:17.474 18:50:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:08:17.474 18:50:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:17.474 18:50:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:17.474 18:50:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:17.474 18:50:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:08:17.474 18:50:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:17.474 18:50:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:08:17.474 18:50:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:17.474 18:50:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:17.474 rmmod nvme_tcp 00:08:17.474 rmmod nvme_fabrics 00:08:17.474 rmmod nvme_keyring 00:08:17.474 18:50:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:17.474 18:50:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:08:17.474 18:50:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:08:17.474 18:50:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3076385 ']' 00:08:17.474 18:50:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3076385 00:08:17.474 18:50:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 3076385 ']' 00:08:17.474 18:50:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 3076385 00:08:17.474 18:50:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:08:17.474 18:50:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:17.474 18:50:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3076385 00:08:17.474 18:50:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:17.474 18:50:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:17.474 18:50:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3076385' 00:08:17.474 killing process with pid 3076385 00:08:17.474 18:50:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 3076385 
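Annotation: nvmftestfini unwinds the test in reverse order of setup. A sketch of what the trap handler traced above does, assuming the pid captured at startup; the trace shows the disconnect, the module unloads (rmmod nvme_tcp, nvme_fabrics, nvme_keyring), and the killprocess/wait pair:

  sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # drops both controllers
  sync
  sudo modprobe -v -r nvme-tcp       # pulls nvme_tcp plus its fabrics/keyring deps
  sudo modprobe -v -r nvme-fabrics
  kill $nvmfpid && wait $nvmfpid     # killprocess + wait from the trace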
00:08:17.474 18:50:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 3076385 00:08:17.734 18:50:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:17.734 18:50:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:17.734 18:50:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:17.734 18:50:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:17.734 18:50:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:17.734 18:50:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.734 18:50:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.734 18:50:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.640 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:19.640 00:08:19.640 real 0m10.490s 00:08:19.640 user 0m25.198s 00:08:19.640 sys 0m2.268s 00:08:19.640 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.640 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:19.640 ************************************ 00:08:19.640 END TEST nvmf_nmic 00:08:19.640 ************************************ 00:08:19.640 18:50:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:19.640 18:50:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:19.640 18:50:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.640 18:50:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:19.640 ************************************ 00:08:19.640 START TEST nvmf_fio_target 00:08:19.640 ************************************ 00:08:19.640 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:19.899 * Looking for test storage... 
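Annotation: run_test is the suite's per-test wrapper; it prints the START/END banners seen above, runs the script with its arguments, and accounts the elapsed real/user/sys time printed at the end of each test. A hypothetical reduction of that helper (the real one in autotest_common.sh also manages xtrace state and argument-count checks):

  run_test() {
      local test_name=$1; shift
      echo "START TEST $test_name"
      time "$@"                      # e.g. fio.sh --transport=tcp
      echo "END TEST $test_name"
  }
  run_test nvmf_fio_target ./spdk/test/nvmf/target/fio.sh --transport=tcp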
00:08:19.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.899 18:50:57 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:19.899 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:19.900 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:19.900 18:50:57 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:19.900 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:19.900 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:19.900 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:19.900 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:19.900 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:19.900 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.900 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.900 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.900 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:19.900 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:19.900 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:08:19.900 18:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:21.799 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:21.799 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:08:21.799 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:21.799 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:21.799 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:21.799 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:21.799 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:21.799 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:08:21.799 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:21.799 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:08:21.799 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:08:21.799 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:21.800 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:21.800 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:21.800 
18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:21.800 Found net devices under 0000:09:00.0: cvl_0_0 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:21.800 Found net devices under 0000:09:00.1: cvl_0_1 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:21.800 18:50:59 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:21.800 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:22.059 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:22.059 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:22.059 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:22.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:22.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:08:22.059 00:08:22.059 --- 10.0.0.2 ping statistics --- 00:08:22.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.059 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:08:22.059 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:22.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:22.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:08:22.059 00:08:22.059 --- 10.0.0.1 ping statistics --- 00:08:22.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.059 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:08:22.059 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:22.059 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:08:22.059 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:22.059 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:22.059 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:22.059 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:22.059 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:22.059 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:22.059 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:22.059 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:22.059 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:22.059 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:22.059 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:22.059 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3079108 00:08:22.059 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:22.059 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3079108 00:08:22.059 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 3079108 ']' 00:08:22.059 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.059 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:22.059 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.059 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:22.059 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:22.059 [2024-07-24 18:50:59.527687] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
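Annotation: the nvmf_tcp_init trace just above is what makes these pings work: the first e810 port (cvl_0_0) moves into a private namespace to act as the target, while the second (cvl_0_1) stays on the host as the initiator. The equivalent bare commands, as traced in nvmf/common.sh:

  sudo ip netns add cvl_0_0_ns_spdk
  sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # target-side port
  sudo ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side stays on the host
  sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  sudo ip link set cvl_0_1 up
  sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
  sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # host -> namespace
  sudo ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # namespace -> host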
00:08:22.059 [2024-07-24 18:50:59.527776] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.059 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.059 [2024-07-24 18:50:59.606315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:22.317 [2024-07-24 18:50:59.733027] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:22.317 [2024-07-24 18:50:59.733085] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:22.317 [2024-07-24 18:50:59.733111] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:22.317 [2024-07-24 18:50:59.733127] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:22.317 [2024-07-24 18:50:59.733139] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:22.317 [2024-07-24 18:50:59.733203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.317 [2024-07-24 18:50:59.733237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:22.317 [2024-07-24 18:50:59.733291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:22.317 [2024-07-24 18:50:59.733295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.317 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:22.317 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:08:22.317 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:22.317 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:22.317 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:22.317 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:22.317 18:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:22.575 [2024-07-24 18:51:00.131612] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:22.575 18:51:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:22.832 18:51:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:22.832 18:51:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:23.398 18:51:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:23.398 18:51:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:23.398 18:51:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:23.398 18:51:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:23.961 18:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:23.961 18:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:23.961 18:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:24.219 18:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:24.219 18:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:24.475 18:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:24.475 18:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:25.038 18:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:25.038 18:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:08:25.038 18:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:25.312 18:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:25.312 18:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:25.570 18:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:25.570 18:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:25.828 18:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:26.085 [2024-07-24 18:51:03.588070] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.085 18:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:26.342 18:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:26.599 18:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:27.562 18:51:04 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:08:27.562 18:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:08:27.562 18:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:27.562 18:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:08:27.562 18:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:08:27.562 18:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:08:29.457 18:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:29.457 18:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:29.457 18:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:29.457 18:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:08:29.457 18:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:29.457 18:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:08:29.457 18:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:29.457 [global] 00:08:29.457 thread=1 00:08:29.457 invalidate=1 00:08:29.457 rw=write 00:08:29.457 time_based=1 00:08:29.457 runtime=1 00:08:29.457 ioengine=libaio 00:08:29.457 direct=1 00:08:29.457 bs=4096 00:08:29.457 iodepth=1 00:08:29.457 norandommap=0 00:08:29.457 numjobs=1 00:08:29.457 00:08:29.457 verify_dump=1 00:08:29.457 verify_backlog=512 00:08:29.457 verify_state_save=0 00:08:29.457 do_verify=1 00:08:29.457 verify=crc32c-intel 00:08:29.457 [job0] 00:08:29.457 filename=/dev/nvme0n1 00:08:29.457 [job1] 00:08:29.457 filename=/dev/nvme0n2 00:08:29.457 [job2] 00:08:29.457 filename=/dev/nvme0n3 00:08:29.457 [job3] 00:08:29.457 filename=/dev/nvme0n4 00:08:29.457 Could not set queue depth (nvme0n1) 00:08:29.457 Could not set queue depth (nvme0n2) 00:08:29.457 Could not set queue depth (nvme0n3) 00:08:29.457 Could not set queue depth (nvme0n4) 00:08:29.457 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:29.457 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:29.457 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:29.457 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:29.457 fio-3.35 00:08:29.457 Starting 4 threads 00:08:30.830 00:08:30.830 job0: (groupid=0, jobs=1): err= 0: pid=3080290: Wed Jul 24 18:51:08 2024 00:08:30.830 read: IOPS=21, BW=86.6KiB/s (88.7kB/s)(88.0KiB/1016msec) 00:08:30.830 slat (nsec): min=8520, max=14617, avg=13798.59, stdev=1202.64 00:08:30.830 clat (usec): min=40565, max=41092, avg=40963.96, stdev=100.59 00:08:30.830 lat (usec): min=40574, max=41107, avg=40977.76, stdev=101.69 00:08:30.830 clat percentiles (usec): 00:08:30.830 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 
20.00th=[41157], 00:08:30.830 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:30.830 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:30.830 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:30.830 | 99.99th=[41157] 00:08:30.830 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:08:30.830 slat (nsec): min=8209, max=39258, avg=9889.62, stdev=2167.35 00:08:30.830 clat (usec): min=174, max=339, avg=209.38, stdev=21.77 00:08:30.830 lat (usec): min=184, max=348, avg=219.26, stdev=22.19 00:08:30.830 clat percentiles (usec): 00:08:30.830 | 1.00th=[ 178], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 194], 00:08:30.830 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 212], 00:08:30.830 | 70.00th=[ 219], 80.00th=[ 223], 90.00th=[ 231], 95.00th=[ 241], 00:08:30.830 | 99.00th=[ 322], 99.50th=[ 334], 99.90th=[ 338], 99.95th=[ 338], 00:08:30.830 | 99.99th=[ 338] 00:08:30.830 bw ( KiB/s): min= 4096, max= 4096, per=50.95%, avg=4096.00, stdev= 0.00, samples=1 00:08:30.830 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:30.830 lat (usec) : 250=92.88%, 500=3.00% 00:08:30.830 lat (msec) : 50=4.12% 00:08:30.830 cpu : usr=0.39%, sys=0.59%, ctx=535, majf=0, minf=1 00:08:30.830 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:30.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:30.830 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:30.830 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:30.830 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:30.830 job1: (groupid=0, jobs=1): err= 0: pid=3080291: Wed Jul 24 18:51:08 2024 00:08:30.830 read: IOPS=40, BW=162KiB/s (165kB/s)(164KiB/1015msec) 00:08:30.830 slat (nsec): min=5708, max=36222, avg=14195.10, stdev=8012.74 00:08:30.830 clat (usec): min=390, max=42232, avg=20474.94, stdev=20736.51 00:08:30.830 lat (usec): min=398, max=42255, avg=20489.14, stdev=20740.42 00:08:30.830 clat percentiles (usec): 00:08:30.830 | 1.00th=[ 392], 5.00th=[ 424], 10.00th=[ 445], 20.00th=[ 474], 00:08:30.830 | 30.00th=[ 506], 40.00th=[ 529], 50.00th=[ 619], 60.00th=[41157], 00:08:30.830 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:08:30.830 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:30.830 | 99.99th=[42206] 00:08:30.830 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:08:30.830 slat (nsec): min=7688, max=49437, avg=15746.66, stdev=7160.94 00:08:30.830 clat (usec): min=195, max=811, avg=320.06, stdev=75.71 00:08:30.830 lat (usec): min=208, max=822, avg=335.81, stdev=76.05 00:08:30.830 clat percentiles (usec): 00:08:30.830 | 1.00th=[ 217], 5.00th=[ 231], 10.00th=[ 243], 20.00th=[ 255], 00:08:30.830 | 30.00th=[ 269], 40.00th=[ 285], 50.00th=[ 302], 60.00th=[ 330], 00:08:30.830 | 70.00th=[ 351], 80.00th=[ 379], 90.00th=[ 420], 95.00th=[ 457], 00:08:30.830 | 99.00th=[ 562], 99.50th=[ 603], 99.90th=[ 816], 99.95th=[ 816], 00:08:30.830 | 99.99th=[ 816] 00:08:30.830 bw ( KiB/s): min= 4096, max= 4096, per=50.95%, avg=4096.00, stdev= 0.00, samples=1 00:08:30.830 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:30.830 lat (usec) : 250=13.20%, 500=79.75%, 750=3.25%, 1000=0.18% 00:08:30.830 lat (msec) : 50=3.62% 00:08:30.830 cpu : usr=0.69%, sys=0.89%, ctx=554, majf=0, minf=1 00:08:30.830 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:08:30.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:30.830 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:30.830 issued rwts: total=41,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:30.830 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:30.830 job2: (groupid=0, jobs=1): err= 0: pid=3080292: Wed Jul 24 18:51:08 2024 00:08:30.830 read: IOPS=29, BW=119KiB/s (122kB/s)(120KiB/1006msec) 00:08:30.830 slat (nsec): min=5792, max=34360, avg=17833.63, stdev=9209.16 00:08:30.830 clat (usec): min=444, max=41377, avg=27488.27, stdev=19402.26 00:08:30.830 lat (usec): min=451, max=41410, avg=27506.11, stdev=19406.35 00:08:30.830 clat percentiles (usec): 00:08:30.830 | 1.00th=[ 445], 5.00th=[ 490], 10.00th=[ 494], 20.00th=[ 510], 00:08:30.830 | 30.00th=[ 562], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:08:30.830 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:30.830 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:30.830 | 99.99th=[41157] 00:08:30.830 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:08:30.830 slat (usec): min=7, max=18456, avg=51.13, stdev=815.02 00:08:30.830 clat (usec): min=178, max=795, avg=296.55, stdev=68.62 00:08:30.830 lat (usec): min=187, max=18902, avg=347.68, stdev=824.53 00:08:30.830 clat percentiles (usec): 00:08:30.830 | 1.00th=[ 186], 5.00th=[ 200], 10.00th=[ 210], 20.00th=[ 231], 00:08:30.830 | 30.00th=[ 247], 40.00th=[ 277], 50.00th=[ 293], 60.00th=[ 326], 00:08:30.830 | 70.00th=[ 338], 80.00th=[ 355], 90.00th=[ 375], 95.00th=[ 396], 00:08:30.830 | 99.00th=[ 445], 99.50th=[ 478], 99.90th=[ 799], 99.95th=[ 799], 00:08:30.830 | 99.99th=[ 799] 00:08:30.830 bw ( KiB/s): min= 4096, max= 4096, per=50.95%, avg=4096.00, stdev= 0.00, samples=1 00:08:30.830 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:30.830 lat (usec) : 250=28.97%, 500=66.05%, 750=1.11%, 1000=0.18% 00:08:30.830 lat (msec) : 50=3.69% 00:08:30.830 cpu : usr=0.70%, sys=0.50%, ctx=544, majf=0, minf=1 00:08:30.830 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:30.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:30.830 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:30.831 issued rwts: total=30,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:30.831 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:30.831 job3: (groupid=0, jobs=1): err= 0: pid=3080293: Wed Jul 24 18:51:08 2024 00:08:30.831 read: IOPS=19, BW=78.5KiB/s (80.4kB/s)(80.0KiB/1019msec) 00:08:30.831 slat (nsec): min=7322, max=33496, avg=18113.10, stdev=6480.00 00:08:30.831 clat (usec): min=40911, max=42084, avg=41781.86, stdev=414.16 00:08:30.831 lat (usec): min=40928, max=42100, avg=41799.97, stdev=415.76 00:08:30.831 clat percentiles (usec): 00:08:30.831 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:08:30.831 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:08:30.831 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:08:30.831 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:30.831 | 99.99th=[42206] 00:08:30.831 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:08:30.831 slat (nsec): min=6402, max=63276, avg=16660.74, stdev=8211.38 00:08:30.831 clat (usec): min=216, max=744, avg=335.37, stdev=57.49 
00:08:30.831 lat (usec): min=224, max=761, avg=352.03, stdev=59.15 00:08:30.831 clat percentiles (usec): 00:08:30.831 | 1.00th=[ 237], 5.00th=[ 255], 10.00th=[ 269], 20.00th=[ 285], 00:08:30.831 | 30.00th=[ 306], 40.00th=[ 322], 50.00th=[ 334], 60.00th=[ 343], 00:08:30.831 | 70.00th=[ 363], 80.00th=[ 379], 90.00th=[ 404], 95.00th=[ 433], 00:08:30.831 | 99.00th=[ 502], 99.50th=[ 529], 99.90th=[ 742], 99.95th=[ 742], 00:08:30.831 | 99.99th=[ 742] 00:08:30.831 bw ( KiB/s): min= 4096, max= 4096, per=50.95%, avg=4096.00, stdev= 0.00, samples=1 00:08:30.831 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:30.831 lat (usec) : 250=3.57%, 500=91.54%, 750=1.13% 00:08:30.831 lat (msec) : 50=3.76% 00:08:30.831 cpu : usr=0.59%, sys=0.69%, ctx=532, majf=0, minf=2 00:08:30.831 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:30.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:30.831 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:30.831 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:30.831 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:30.831 00:08:30.831 Run status group 0 (all jobs): 00:08:30.831 READ: bw=444KiB/s (454kB/s), 78.5KiB/s-162KiB/s (80.4kB/s-165kB/s), io=452KiB (463kB), run=1006-1019msec 00:08:30.831 WRITE: bw=8039KiB/s (8232kB/s), 2010KiB/s-2036KiB/s (2058kB/s-2085kB/s), io=8192KiB (8389kB), run=1006-1019msec 00:08:30.831 00:08:30.831 Disk stats (read/write): 00:08:30.831 nvme0n1: ios=69/512, merge=0/0, ticks=1152/104, in_queue=1256, util=98.20% 00:08:30.831 nvme0n2: ios=61/512, merge=0/0, ticks=1660/153, in_queue=1813, util=97.76% 00:08:30.831 nvme0n3: ios=50/512, merge=0/0, ticks=1650/149, in_queue=1799, util=97.90% 00:08:30.831 nvme0n4: ios=15/512, merge=0/0, ticks=628/170, in_queue=798, util=89.64% 00:08:30.831 18:51:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:08:30.831 [global] 00:08:30.831 thread=1 00:08:30.831 invalidate=1 00:08:30.831 rw=randwrite 00:08:30.831 time_based=1 00:08:30.831 runtime=1 00:08:30.831 ioengine=libaio 00:08:30.831 direct=1 00:08:30.831 bs=4096 00:08:30.831 iodepth=1 00:08:30.831 norandommap=0 00:08:30.831 numjobs=1 00:08:30.831 00:08:30.831 verify_dump=1 00:08:30.831 verify_backlog=512 00:08:30.831 verify_state_save=0 00:08:30.831 do_verify=1 00:08:30.831 verify=crc32c-intel 00:08:30.831 [job0] 00:08:30.831 filename=/dev/nvme0n1 00:08:30.831 [job1] 00:08:30.831 filename=/dev/nvme0n2 00:08:30.831 [job2] 00:08:30.831 filename=/dev/nvme0n3 00:08:30.831 [job3] 00:08:30.831 filename=/dev/nvme0n4 00:08:30.831 Could not set queue depth (nvme0n1) 00:08:30.831 Could not set queue depth (nvme0n2) 00:08:30.831 Could not set queue depth (nvme0n3) 00:08:30.831 Could not set queue depth (nvme0n4) 00:08:31.090 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:31.090 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:31.090 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:31.090 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:31.090 fio-3.35 00:08:31.090 Starting 4 threads 00:08:32.460 00:08:32.460 job0: (groupid=0, jobs=1): err= 0: 
pid=3080919: Wed Jul 24 18:51:09 2024 00:08:32.460 read: IOPS=514, BW=2059KiB/s (2108kB/s)(2104KiB/1022msec) 00:08:32.460 slat (nsec): min=5507, max=37167, avg=7081.10, stdev=2910.36 00:08:32.460 clat (usec): min=311, max=42065, avg=1442.68, stdev=6566.67 00:08:32.460 lat (usec): min=317, max=42078, avg=1449.77, stdev=6568.14 00:08:32.460 clat percentiles (usec): 00:08:32.460 | 1.00th=[ 314], 5.00th=[ 322], 10.00th=[ 326], 20.00th=[ 330], 00:08:32.460 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 347], 60.00th=[ 351], 00:08:32.460 | 70.00th=[ 363], 80.00th=[ 379], 90.00th=[ 441], 95.00th=[ 482], 00:08:32.460 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:08:32.460 | 99.99th=[42206] 00:08:32.460 write: IOPS=1001, BW=4008KiB/s (4104kB/s)(4096KiB/1022msec); 0 zone resets 00:08:32.460 slat (nsec): min=7202, max=36786, avg=10350.66, stdev=3541.76 00:08:32.460 clat (usec): min=175, max=487, avg=238.56, stdev=45.22 00:08:32.460 lat (usec): min=184, max=496, avg=248.91, stdev=45.65 00:08:32.460 clat percentiles (usec): 00:08:32.460 | 1.00th=[ 180], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 202], 00:08:32.460 | 30.00th=[ 212], 40.00th=[ 223], 50.00th=[ 231], 60.00th=[ 241], 00:08:32.460 | 70.00th=[ 247], 80.00th=[ 260], 90.00th=[ 293], 95.00th=[ 343], 00:08:32.460 | 99.00th=[ 388], 99.50th=[ 424], 99.90th=[ 465], 99.95th=[ 490], 00:08:32.460 | 99.99th=[ 490] 00:08:32.460 bw ( KiB/s): min= 8192, max= 8192, per=45.87%, avg=8192.00, stdev= 0.00, samples=1 00:08:32.460 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:32.460 lat (usec) : 250=47.68%, 500=51.42% 00:08:32.460 lat (msec) : 50=0.90% 00:08:32.460 cpu : usr=1.08%, sys=1.76%, ctx=1553, majf=0, minf=1 00:08:32.460 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:32.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.460 issued rwts: total=526,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:32.460 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:32.461 job1: (groupid=0, jobs=1): err= 0: pid=3080929: Wed Jul 24 18:51:09 2024 00:08:32.461 read: IOPS=503, BW=2016KiB/s (2064kB/s)(2080KiB/1032msec) 00:08:32.461 slat (nsec): min=5015, max=40291, avg=10284.37, stdev=5221.39 00:08:32.461 clat (usec): min=297, max=42020, avg=1453.00, stdev=6631.72 00:08:32.461 lat (usec): min=305, max=42035, avg=1463.28, stdev=6632.55 00:08:32.461 clat percentiles (usec): 00:08:32.461 | 1.00th=[ 302], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 322], 00:08:32.461 | 30.00th=[ 330], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 351], 00:08:32.461 | 70.00th=[ 363], 80.00th=[ 379], 90.00th=[ 404], 95.00th=[ 445], 00:08:32.461 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:32.461 | 99.99th=[42206] 00:08:32.461 write: IOPS=992, BW=3969KiB/s (4064kB/s)(4096KiB/1032msec); 0 zone resets 00:08:32.461 slat (nsec): min=6306, max=56147, avg=9985.74, stdev=4836.91 00:08:32.461 clat (usec): min=183, max=613, avg=249.89, stdev=50.10 00:08:32.461 lat (usec): min=190, max=629, avg=259.88, stdev=51.69 00:08:32.461 clat percentiles (usec): 00:08:32.461 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 208], 20.00th=[ 219], 00:08:32.461 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 245], 00:08:32.461 | 70.00th=[ 253], 80.00th=[ 269], 90.00th=[ 306], 95.00th=[ 338], 00:08:32.461 | 99.00th=[ 461], 99.50th=[ 486], 99.90th=[ 611], 99.95th=[ 611], 00:08:32.461 | 
99.99th=[ 611] 00:08:32.461 bw ( KiB/s): min= 4096, max= 4096, per=22.93%, avg=4096.00, stdev= 0.00, samples=2 00:08:32.461 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:08:32.461 lat (usec) : 250=43.78%, 500=54.73%, 750=0.58% 00:08:32.461 lat (msec) : 50=0.91% 00:08:32.461 cpu : usr=0.78%, sys=1.45%, ctx=1545, majf=0, minf=1 00:08:32.461 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:32.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.461 issued rwts: total=520,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:32.461 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:32.461 job2: (groupid=0, jobs=1): err= 0: pid=3080930: Wed Jul 24 18:51:09 2024 00:08:32.461 read: IOPS=1035, BW=4144KiB/s (4243kB/s)(4148KiB/1001msec) 00:08:32.461 slat (nsec): min=4744, max=71754, avg=14209.25, stdev=8689.73 00:08:32.461 clat (usec): min=258, max=41044, avg=589.13, stdev=3081.41 00:08:32.461 lat (usec): min=267, max=41052, avg=603.34, stdev=3081.43 00:08:32.461 clat percentiles (usec): 00:08:32.461 | 1.00th=[ 269], 5.00th=[ 285], 10.00th=[ 297], 20.00th=[ 314], 00:08:32.461 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 355], 00:08:32.461 | 70.00th=[ 367], 80.00th=[ 388], 90.00th=[ 429], 95.00th=[ 482], 00:08:32.461 | 99.00th=[ 537], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:08:32.461 | 99.99th=[41157] 00:08:32.461 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:08:32.461 slat (nsec): min=6188, max=42126, avg=10752.92, stdev=4463.94 00:08:32.461 clat (usec): min=169, max=1648, avg=227.27, stdev=74.76 00:08:32.461 lat (usec): min=175, max=1680, avg=238.03, stdev=75.41 00:08:32.461 clat percentiles (usec): 00:08:32.461 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 190], 00:08:32.461 | 30.00th=[ 196], 40.00th=[ 204], 50.00th=[ 217], 60.00th=[ 225], 00:08:32.461 | 70.00th=[ 237], 80.00th=[ 247], 90.00th=[ 269], 95.00th=[ 297], 00:08:32.461 | 99.00th=[ 502], 99.50th=[ 725], 99.90th=[ 1237], 99.95th=[ 1647], 00:08:32.461 | 99.99th=[ 1647] 00:08:32.461 bw ( KiB/s): min= 4096, max= 4096, per=22.93%, avg=4096.00, stdev= 0.00, samples=1 00:08:32.461 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:32.461 lat (usec) : 250=49.51%, 500=48.50%, 750=1.52%, 1000=0.16% 00:08:32.461 lat (msec) : 2=0.08%, 50=0.23% 00:08:32.461 cpu : usr=2.00%, sys=3.10%, ctx=2575, majf=0, minf=2 00:08:32.461 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:32.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.461 issued rwts: total=1037,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:32.461 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:32.461 job3: (groupid=0, jobs=1): err= 0: pid=3080931: Wed Jul 24 18:51:09 2024 00:08:32.461 read: IOPS=624, BW=2500KiB/s (2559kB/s)(2552KiB/1021msec) 00:08:32.461 slat (nsec): min=4855, max=50750, avg=12236.92, stdev=4990.98 00:08:32.461 clat (usec): min=308, max=41047, avg=1137.50, stdev=5315.83 00:08:32.461 lat (usec): min=315, max=41061, avg=1149.73, stdev=5316.45 00:08:32.461 clat percentiles (usec): 00:08:32.461 | 1.00th=[ 314], 5.00th=[ 322], 10.00th=[ 334], 20.00th=[ 347], 00:08:32.461 | 30.00th=[ 371], 40.00th=[ 388], 50.00th=[ 400], 60.00th=[ 429], 00:08:32.461 | 
70.00th=[ 457], 80.00th=[ 478], 90.00th=[ 506], 95.00th=[ 529], 00:08:32.461 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:32.461 | 99.99th=[41157] 00:08:32.461 write: IOPS=1002, BW=4012KiB/s (4108kB/s)(4096KiB/1021msec); 0 zone resets 00:08:32.461 slat (nsec): min=6438, max=38314, avg=10507.88, stdev=5179.48 00:08:32.461 clat (usec): min=180, max=705, avg=264.50, stdev=74.55 00:08:32.461 lat (usec): min=188, max=721, avg=275.01, stdev=77.34 00:08:32.461 clat percentiles (usec): 00:08:32.461 | 1.00th=[ 186], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 210], 00:08:32.461 | 30.00th=[ 221], 40.00th=[ 229], 50.00th=[ 239], 60.00th=[ 247], 00:08:32.461 | 70.00th=[ 269], 80.00th=[ 314], 90.00th=[ 367], 95.00th=[ 424], 00:08:32.461 | 99.00th=[ 523], 99.50th=[ 553], 99.90th=[ 586], 99.95th=[ 709], 00:08:32.461 | 99.99th=[ 709] 00:08:32.461 bw ( KiB/s): min= 4040, max= 4152, per=22.93%, avg=4096.00, stdev=79.20, samples=2 00:08:32.461 iops : min= 1010, max= 1038, avg=1024.00, stdev=19.80, samples=2 00:08:32.461 lat (usec) : 250=37.91%, 500=56.56%, 750=4.81% 00:08:32.461 lat (msec) : 20=0.06%, 50=0.66% 00:08:32.461 cpu : usr=0.88%, sys=1.86%, ctx=1664, majf=0, minf=1 00:08:32.461 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:32.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.461 issued rwts: total=638,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:32.461 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:32.461 00:08:32.461 Run status group 0 (all jobs): 00:08:32.461 READ: bw=10.3MiB/s (10.8MB/s), 2016KiB/s-4144KiB/s (2064kB/s-4243kB/s), io=10.6MiB (11.1MB), run=1001-1032msec 00:08:32.461 WRITE: bw=17.4MiB/s (18.3MB/s), 3969KiB/s-6138KiB/s (4064kB/s-6285kB/s), io=18.0MiB (18.9MB), run=1001-1032msec 00:08:32.461 00:08:32.461 Disk stats (read/write): 00:08:32.461 nvme0n1: ios=562/1024, merge=0/0, ticks=1363/241, in_queue=1604, util=97.39% 00:08:32.461 nvme0n2: ios=562/1024, merge=0/0, ticks=671/254, in_queue=925, util=95.32% 00:08:32.461 nvme0n3: ios=931/1024, merge=0/0, ticks=1511/256, in_queue=1767, util=97.70% 00:08:32.461 nvme0n4: ios=680/1024, merge=0/0, ticks=1305/259, in_queue=1564, util=97.26% 00:08:32.461 18:51:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:08:32.461 [global] 00:08:32.461 thread=1 00:08:32.461 invalidate=1 00:08:32.461 rw=write 00:08:32.461 time_based=1 00:08:32.461 runtime=1 00:08:32.461 ioengine=libaio 00:08:32.461 direct=1 00:08:32.461 bs=4096 00:08:32.461 iodepth=128 00:08:32.461 norandommap=0 00:08:32.461 numjobs=1 00:08:32.461 00:08:32.461 verify_dump=1 00:08:32.461 verify_backlog=512 00:08:32.461 verify_state_save=0 00:08:32.461 do_verify=1 00:08:32.461 verify=crc32c-intel 00:08:32.461 [job0] 00:08:32.461 filename=/dev/nvme0n1 00:08:32.461 [job1] 00:08:32.461 filename=/dev/nvme0n2 00:08:32.461 [job2] 00:08:32.461 filename=/dev/nvme0n3 00:08:32.461 [job3] 00:08:32.461 filename=/dev/nvme0n4 00:08:32.461 Could not set queue depth (nvme0n1) 00:08:32.461 Could not set queue depth (nvme0n2) 00:08:32.461 Could not set queue depth (nvme0n3) 00:08:32.461 Could not set queue depth (nvme0n4) 00:08:32.461 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:32.461 job1: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:32.461 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:32.461 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:32.461 fio-3.35 00:08:32.461 Starting 4 threads 00:08:33.835 00:08:33.835 job0: (groupid=0, jobs=1): err= 0: pid=3081380: Wed Jul 24 18:51:11 2024 00:08:33.835 read: IOPS=3324, BW=13.0MiB/s (13.6MB/s)(13.0MiB/1002msec) 00:08:33.835 slat (usec): min=2, max=15221, avg=137.87, stdev=885.95 00:08:33.835 clat (usec): min=577, max=64787, avg=17900.73, stdev=7196.53 00:08:33.835 lat (usec): min=2610, max=67304, avg=18038.60, stdev=7250.66 00:08:33.835 clat percentiles (usec): 00:08:33.835 | 1.00th=[ 5604], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[11994], 00:08:33.835 | 30.00th=[12780], 40.00th=[15008], 50.00th=[17433], 60.00th=[19268], 00:08:33.835 | 70.00th=[21627], 80.00th=[23462], 90.00th=[25822], 95.00th=[28181], 00:08:33.835 | 99.00th=[34866], 99.50th=[64750], 99.90th=[64750], 99.95th=[64750], 00:08:33.835 | 99.99th=[64750] 00:08:33.835 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:08:33.835 slat (usec): min=3, max=25239, avg=143.73, stdev=1003.50 00:08:33.835 clat (usec): min=792, max=70040, avg=18078.91, stdev=9842.09 00:08:33.835 lat (usec): min=802, max=70054, avg=18222.64, stdev=9894.22 00:08:33.835 clat percentiles (usec): 00:08:33.835 | 1.00th=[ 8717], 5.00th=[10028], 10.00th=[10683], 20.00th=[11338], 00:08:33.835 | 30.00th=[11994], 40.00th=[13173], 50.00th=[14746], 60.00th=[17695], 00:08:33.835 | 70.00th=[20841], 80.00th=[23462], 90.00th=[25035], 95.00th=[39584], 00:08:33.835 | 99.00th=[63177], 99.50th=[66323], 99.90th=[69731], 99.95th=[69731], 00:08:33.835 | 99.99th=[69731] 00:08:33.835 bw ( KiB/s): min=13320, max=15352, per=25.18%, avg=14336.00, stdev=1436.84, samples=2 00:08:33.835 iops : min= 3330, max= 3838, avg=3584.00, stdev=359.21, samples=2 00:08:33.835 lat (usec) : 750=0.01%, 1000=0.03% 00:08:33.835 lat (msec) : 4=0.46%, 10=6.12%, 20=58.79%, 50=33.12%, 100=1.48% 00:08:33.835 cpu : usr=3.10%, sys=5.09%, ctx=357, majf=0, minf=13 00:08:33.835 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:08:33.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:33.835 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:33.835 issued rwts: total=3331,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:33.835 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:33.835 job1: (groupid=0, jobs=1): err= 0: pid=3081381: Wed Jul 24 18:51:11 2024 00:08:33.835 read: IOPS=3119, BW=12.2MiB/s (12.8MB/s)(12.3MiB/1012msec) 00:08:33.835 slat (usec): min=2, max=21655, avg=153.01, stdev=1104.68 00:08:33.835 clat (usec): min=1085, max=128643, avg=18843.75, stdev=15024.03 00:08:33.835 lat (msec): min=7, max=128, avg=19.00, stdev=15.12 00:08:33.835 clat percentiles (msec): 00:08:33.835 | 1.00th=[ 8], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:08:33.835 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 17], 00:08:33.835 | 70.00th=[ 18], 80.00th=[ 22], 90.00th=[ 26], 95.00th=[ 49], 00:08:33.835 | 99.00th=[ 96], 99.50th=[ 100], 99.90th=[ 117], 99.95th=[ 117], 00:08:33.835 | 99.99th=[ 129] 00:08:33.835 write: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec); 0 zone resets 00:08:33.835 slat (usec): min=3, max=13028, avg=136.50, stdev=771.16 00:08:33.835 clat (usec): 
min=1084, max=65915, avg=19211.25, stdev=11281.32 00:08:33.835 lat (usec): min=1103, max=65923, avg=19347.75, stdev=11362.75 00:08:33.835 clat percentiles (usec): 00:08:33.835 | 1.00th=[ 5735], 5.00th=[ 7963], 10.00th=[10028], 20.00th=[10814], 00:08:33.835 | 30.00th=[12649], 40.00th=[13435], 50.00th=[15270], 60.00th=[17433], 00:08:33.835 | 70.00th=[21365], 80.00th=[26084], 90.00th=[34341], 95.00th=[39584], 00:08:33.835 | 99.00th=[63177], 99.50th=[63701], 99.90th=[65799], 99.95th=[65799], 00:08:33.835 | 99.99th=[65799] 00:08:33.835 bw ( KiB/s): min=12624, max=15735, per=24.90%, avg=14179.50, stdev=2199.81, samples=2 00:08:33.835 iops : min= 3156, max= 3933, avg=3544.50, stdev=549.42, samples=2 00:08:33.835 lat (msec) : 2=0.04%, 10=7.74%, 20=64.52%, 50=23.63%, 100=3.84% 00:08:33.835 lat (msec) : 250=0.22% 00:08:33.835 cpu : usr=3.46%, sys=6.23%, ctx=330, majf=0, minf=11 00:08:33.835 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:08:33.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:33.835 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:33.835 issued rwts: total=3157,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:33.835 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:33.835 job2: (groupid=0, jobs=1): err= 0: pid=3081382: Wed Jul 24 18:51:11 2024 00:08:33.835 read: IOPS=4164, BW=16.3MiB/s (17.1MB/s)(16.5MiB/1012msec) 00:08:33.835 slat (usec): min=3, max=14627, avg=115.30, stdev=780.04 00:08:33.835 clat (usec): min=4047, max=48089, avg=14705.16, stdev=5313.11 00:08:33.835 lat (usec): min=4054, max=48107, avg=14820.46, stdev=5364.10 00:08:33.835 clat percentiles (usec): 00:08:33.835 | 1.00th=[ 6325], 5.00th=[10552], 10.00th=[10814], 20.00th=[11207], 00:08:33.835 | 30.00th=[11994], 40.00th=[12649], 50.00th=[13304], 60.00th=[13960], 00:08:33.835 | 70.00th=[15008], 80.00th=[17171], 90.00th=[20579], 95.00th=[24249], 00:08:33.835 | 99.00th=[37487], 99.50th=[38536], 99.90th=[47973], 99.95th=[47973], 00:08:33.835 | 99.99th=[47973] 00:08:33.835 write: IOPS=4553, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1012msec); 0 zone resets 00:08:33.835 slat (usec): min=4, max=14530, avg=102.47, stdev=621.42 00:08:33.835 clat (usec): min=1172, max=49373, avg=14400.63, stdev=6746.54 00:08:33.835 lat (usec): min=1181, max=49406, avg=14503.10, stdev=6780.71 00:08:33.835 clat percentiles (usec): 00:08:33.835 | 1.00th=[ 3064], 5.00th=[ 6587], 10.00th=[ 7963], 20.00th=[ 9372], 00:08:33.835 | 30.00th=[11600], 40.00th=[12256], 50.00th=[12780], 60.00th=[13566], 00:08:33.835 | 70.00th=[15270], 80.00th=[18482], 90.00th=[23200], 95.00th=[27395], 00:08:33.835 | 99.00th=[37487], 99.50th=[41157], 99.90th=[49546], 99.95th=[49546], 00:08:33.835 | 99.99th=[49546] 00:08:33.835 bw ( KiB/s): min=17592, max=19192, per=32.30%, avg=18392.00, stdev=1131.37, samples=2 00:08:33.835 iops : min= 4398, max= 4798, avg=4598.00, stdev=282.84, samples=2 00:08:33.835 lat (msec) : 2=0.12%, 4=0.85%, 10=12.29%, 20=73.00%, 50=13.74% 00:08:33.835 cpu : usr=5.14%, sys=9.30%, ctx=446, majf=0, minf=11 00:08:33.835 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:08:33.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:33.835 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:33.835 issued rwts: total=4214,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:33.835 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:33.835 job3: (groupid=0, jobs=1): err= 0: 
pid=3081383: Wed Jul 24 18:51:11 2024 00:08:33.835 read: IOPS=2866, BW=11.2MiB/s (11.7MB/s)(11.7MiB/1043msec) 00:08:33.835 slat (usec): min=2, max=12495, avg=145.22, stdev=854.45 00:08:33.835 clat (usec): min=10130, max=55747, avg=19708.64, stdev=8556.76 00:08:33.835 lat (usec): min=10180, max=55776, avg=19853.86, stdev=8584.50 00:08:33.835 clat percentiles (usec): 00:08:33.835 | 1.00th=[10421], 5.00th=[11731], 10.00th=[13042], 20.00th=[13698], 00:08:33.835 | 30.00th=[14222], 40.00th=[15008], 50.00th=[17695], 60.00th=[18744], 00:08:33.835 | 70.00th=[22938], 80.00th=[23987], 90.00th=[28181], 95.00th=[38536], 00:08:33.835 | 99.00th=[55313], 99.50th=[55313], 99.90th=[55837], 99.95th=[55837], 00:08:33.835 | 99.99th=[55837] 00:08:33.835 write: IOPS=2945, BW=11.5MiB/s (12.1MB/s)(12.0MiB/1043msec); 0 zone resets 00:08:33.835 slat (usec): min=3, max=41951, avg=176.18, stdev=1263.86 00:08:33.835 clat (usec): min=7967, max=68628, avg=20818.59, stdev=6367.52 00:08:33.835 lat (usec): min=7979, max=90758, avg=20994.76, stdev=6552.11 00:08:33.835 clat percentiles (usec): 00:08:33.835 | 1.00th=[10159], 5.00th=[11207], 10.00th=[13042], 20.00th=[14484], 00:08:33.835 | 30.00th=[15139], 40.00th=[18220], 50.00th=[22414], 60.00th=[23462], 00:08:33.835 | 70.00th=[24249], 80.00th=[25822], 90.00th=[29230], 95.00th=[31589], 00:08:33.835 | 99.00th=[34341], 99.50th=[34341], 99.90th=[38536], 99.95th=[41157], 00:08:33.835 | 99.99th=[68682] 00:08:33.835 bw ( KiB/s): min=11944, max=12632, per=21.58%, avg=12288.00, stdev=486.49, samples=2 00:08:33.835 iops : min= 2986, max= 3158, avg=3072.00, stdev=121.62, samples=2 00:08:33.835 lat (msec) : 10=0.08%, 20=53.99%, 50=44.52%, 100=1.40% 00:08:33.835 cpu : usr=2.59%, sys=4.70%, ctx=275, majf=0, minf=17 00:08:33.835 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:08:33.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:33.835 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:33.835 issued rwts: total=2990,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:33.835 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:33.835 00:08:33.835 Run status group 0 (all jobs): 00:08:33.835 READ: bw=51.3MiB/s (53.8MB/s), 11.2MiB/s-16.3MiB/s (11.7MB/s-17.1MB/s), io=53.5MiB (56.1MB), run=1002-1043msec 00:08:33.835 WRITE: bw=55.6MiB/s (58.3MB/s), 11.5MiB/s-17.8MiB/s (12.1MB/s-18.7MB/s), io=58.0MiB (60.8MB), run=1002-1043msec 00:08:33.835 00:08:33.835 Disk stats (read/write): 00:08:33.835 nvme0n1: ios=2712/3072, merge=0/0, ticks=17874/18844, in_queue=36718, util=89.58% 00:08:33.835 nvme0n2: ios=2747/3072, merge=0/0, ticks=30246/39866, in_queue=70112, util=98.17% 00:08:33.835 nvme0n3: ios=3601/4096, merge=0/0, ticks=50848/52380, in_queue=103228, util=93.84% 00:08:33.835 nvme0n4: ios=2251/2560, merge=0/0, ticks=13121/16182, in_queue=29303, util=100.00% 00:08:33.835 18:51:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:08:33.835 [global] 00:08:33.835 thread=1 00:08:33.835 invalidate=1 00:08:33.835 rw=randwrite 00:08:33.835 time_based=1 00:08:33.835 runtime=1 00:08:33.835 ioengine=libaio 00:08:33.835 direct=1 00:08:33.835 bs=4096 00:08:33.835 iodepth=128 00:08:33.835 norandommap=0 00:08:33.835 numjobs=1 00:08:33.835 00:08:33.835 verify_dump=1 00:08:33.835 verify_backlog=512 00:08:33.835 verify_state_save=0 00:08:33.836 do_verify=1 00:08:33.836 verify=crc32c-intel 
00:08:33.836 [job0] 00:08:33.836 filename=/dev/nvme0n1 00:08:33.836 [job1] 00:08:33.836 filename=/dev/nvme0n2 00:08:33.836 [job2] 00:08:33.836 filename=/dev/nvme0n3 00:08:33.836 [job3] 00:08:33.836 filename=/dev/nvme0n4 00:08:33.836 Could not set queue depth (nvme0n1) 00:08:33.836 Could not set queue depth (nvme0n2) 00:08:33.836 Could not set queue depth (nvme0n3) 00:08:33.836 Could not set queue depth (nvme0n4) 00:08:34.093 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:34.093 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:34.093 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:34.093 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:34.093 fio-3.35 00:08:34.093 Starting 4 threads 00:08:35.465 00:08:35.465 job0: (groupid=0, jobs=1): err= 0: pid=3081619: Wed Jul 24 18:51:12 2024 00:08:35.465 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:08:35.465 slat (usec): min=3, max=4415, avg=101.54, stdev=513.08 00:08:35.465 clat (usec): min=9069, max=22329, avg=13207.10, stdev=1408.75 00:08:35.465 lat (usec): min=9075, max=23064, avg=13308.65, stdev=1415.02 00:08:35.465 clat percentiles (usec): 00:08:35.465 | 1.00th=[ 9765], 5.00th=[10552], 10.00th=[11338], 20.00th=[12387], 00:08:35.465 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13173], 60.00th=[13435], 00:08:35.465 | 70.00th=[13698], 80.00th=[14222], 90.00th=[14615], 95.00th=[15270], 00:08:35.465 | 99.00th=[16909], 99.50th=[19268], 99.90th=[22152], 99.95th=[22414], 00:08:35.465 | 99.99th=[22414] 00:08:35.465 write: IOPS=4568, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:08:35.465 slat (usec): min=3, max=21644, avg=118.08, stdev=785.41 00:08:35.465 clat (usec): min=941, max=56949, avg=15566.82, stdev=6224.79 00:08:35.465 lat (usec): min=8375, max=56981, avg=15684.90, stdev=6270.39 00:08:35.465 clat percentiles (usec): 00:08:35.465 | 1.00th=[ 9110], 5.00th=[10683], 10.00th=[11731], 20.00th=[12518], 00:08:35.465 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13173], 60.00th=[13829], 00:08:35.465 | 70.00th=[14484], 80.00th=[17171], 90.00th=[23725], 95.00th=[32637], 00:08:35.465 | 99.00th=[40633], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:35.465 | 99.99th=[56886] 00:08:35.465 bw ( KiB/s): min=16384, max=19432, per=24.13%, avg=17908.00, stdev=2155.26, samples=2 00:08:35.465 iops : min= 4096, max= 4858, avg=4477.00, stdev=538.82, samples=2 00:08:35.465 lat (usec) : 1000=0.01% 00:08:35.465 lat (msec) : 10=2.34%, 20=90.30%, 50=7.33%, 100=0.01% 00:08:35.465 cpu : usr=5.46%, sys=9.33%, ctx=391, majf=0, minf=1 00:08:35.465 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:08:35.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:35.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:35.465 issued rwts: total=4096,4605,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:35.465 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:35.465 job1: (groupid=0, jobs=1): err= 0: pid=3081620: Wed Jul 24 18:51:12 2024 00:08:35.465 read: IOPS=4995, BW=19.5MiB/s (20.5MB/s)(19.7MiB/1009msec) 00:08:35.465 slat (usec): min=2, max=11374, avg=104.05, stdev=720.59 00:08:35.465 clat (usec): min=4512, max=30489, avg=13231.65, stdev=3669.50 00:08:35.465 lat (usec): min=4534, max=30495, 
avg=13335.70, stdev=3706.07 00:08:35.465 clat percentiles (usec): 00:08:35.465 | 1.00th=[ 6325], 5.00th=[ 8848], 10.00th=[ 9896], 20.00th=[10683], 00:08:35.465 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11863], 60.00th=[12911], 00:08:35.465 | 70.00th=[14222], 80.00th=[16057], 90.00th=[19006], 95.00th=[20579], 00:08:35.465 | 99.00th=[23200], 99.50th=[26084], 99.90th=[30540], 99.95th=[30540], 00:08:35.465 | 99.99th=[30540] 00:08:35.466 write: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec); 0 zone resets 00:08:35.466 slat (usec): min=3, max=10434, avg=82.30, stdev=445.63 00:08:35.466 clat (usec): min=1080, max=26865, avg=11964.51, stdev=4140.88 00:08:35.466 lat (usec): min=1089, max=28800, avg=12046.81, stdev=4163.05 00:08:35.466 clat percentiles (usec): 00:08:35.466 | 1.00th=[ 4146], 5.00th=[ 6259], 10.00th=[ 7177], 20.00th=[ 8225], 00:08:35.466 | 30.00th=[10552], 40.00th=[11338], 50.00th=[11994], 60.00th=[12256], 00:08:35.466 | 70.00th=[12649], 80.00th=[13960], 90.00th=[16909], 95.00th=[20841], 00:08:35.466 | 99.00th=[25297], 99.50th=[25822], 99.90th=[26346], 99.95th=[26346], 00:08:35.466 | 99.99th=[26870] 00:08:35.466 bw ( KiB/s): min=20480, max=20480, per=27.59%, avg=20480.00, stdev= 0.00, samples=2 00:08:35.466 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:08:35.466 lat (msec) : 2=0.02%, 4=0.33%, 10=19.25%, 20=73.31%, 50=7.09% 00:08:35.466 cpu : usr=7.14%, sys=9.82%, ctx=518, majf=0, minf=1 00:08:35.466 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:08:35.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:35.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:35.466 issued rwts: total=5040,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:35.466 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:35.466 job2: (groupid=0, jobs=1): err= 0: pid=3081621: Wed Jul 24 18:51:12 2024 00:08:35.466 read: IOPS=3940, BW=15.4MiB/s (16.1MB/s)(15.5MiB/1005msec) 00:08:35.466 slat (usec): min=3, max=7927, avg=121.94, stdev=672.06 00:08:35.466 clat (usec): min=829, max=25810, avg=15655.95, stdev=2243.51 00:08:35.466 lat (usec): min=6041, max=25816, avg=15777.89, stdev=2284.57 00:08:35.466 clat percentiles (usec): 00:08:35.466 | 1.00th=[ 6390], 5.00th=[11863], 10.00th=[13304], 20.00th=[14746], 00:08:35.466 | 30.00th=[15139], 40.00th=[15270], 50.00th=[15664], 60.00th=[16057], 00:08:35.466 | 70.00th=[16450], 80.00th=[16909], 90.00th=[18220], 95.00th=[19006], 00:08:35.466 | 99.00th=[21103], 99.50th=[21890], 99.90th=[23462], 99.95th=[23987], 00:08:35.466 | 99.99th=[25822] 00:08:35.466 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:08:35.466 slat (usec): min=4, max=7700, avg=116.48, stdev=618.80 00:08:35.466 clat (usec): min=7054, max=40377, avg=15861.39, stdev=3745.47 00:08:35.466 lat (usec): min=7061, max=40407, avg=15977.87, stdev=3774.77 00:08:35.466 clat percentiles (usec): 00:08:35.466 | 1.00th=[ 9372], 5.00th=[12256], 10.00th=[12780], 20.00th=[14091], 00:08:35.466 | 30.00th=[14746], 40.00th=[15139], 50.00th=[15401], 60.00th=[15926], 00:08:35.466 | 70.00th=[16188], 80.00th=[16712], 90.00th=[18482], 95.00th=[20841], 00:08:35.466 | 99.00th=[34341], 99.50th=[38011], 99.90th=[40109], 99.95th=[40633], 00:08:35.466 | 99.99th=[40633] 00:08:35.466 bw ( KiB/s): min=16384, max=16384, per=22.07%, avg=16384.00, stdev= 0.00, samples=2 00:08:35.466 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:08:35.466 lat (usec) : 1000=0.01% 00:08:35.466 
lat (msec) : 10=2.10%, 20=93.45%, 50=4.44% 00:08:35.466 cpu : usr=5.28%, sys=8.86%, ctx=403, majf=0, minf=1 00:08:35.466 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:08:35.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:35.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:35.466 issued rwts: total=3960,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:35.466 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:35.466 job3: (groupid=0, jobs=1): err= 0: pid=3081622: Wed Jul 24 18:51:12 2024 00:08:35.466 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:08:35.466 slat (usec): min=3, max=6234, avg=103.09, stdev=607.02 00:08:35.466 clat (usec): min=7039, max=19850, avg=13278.99, stdev=1569.59 00:08:35.466 lat (usec): min=8158, max=20245, avg=13382.08, stdev=1623.96 00:08:35.466 clat percentiles (usec): 00:08:35.466 | 1.00th=[ 9110], 5.00th=[10421], 10.00th=[11600], 20.00th=[12256], 00:08:35.466 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13304], 60.00th=[13435], 00:08:35.466 | 70.00th=[13698], 80.00th=[14091], 90.00th=[14877], 95.00th=[16188], 00:08:35.466 | 99.00th=[18220], 99.50th=[18744], 99.90th=[19268], 99.95th=[19268], 00:08:35.466 | 99.99th=[19792] 00:08:35.466 write: IOPS=4873, BW=19.0MiB/s (20.0MB/s)(19.2MiB/1006msec); 0 zone resets 00:08:35.466 slat (usec): min=3, max=6901, avg=96.33, stdev=507.25 00:08:35.466 clat (usec): min=5486, max=22895, avg=13387.82, stdev=1875.54 00:08:35.466 lat (usec): min=6109, max=22912, avg=13484.16, stdev=1874.06 00:08:35.466 clat percentiles (usec): 00:08:35.466 | 1.00th=[ 6980], 5.00th=[10290], 10.00th=[11469], 20.00th=[12518], 00:08:35.466 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13566], 60.00th=[13698], 00:08:35.466 | 70.00th=[13960], 80.00th=[14222], 90.00th=[14746], 95.00th=[16909], 00:08:35.466 | 99.00th=[18744], 99.50th=[19006], 99.90th=[19268], 99.95th=[21365], 00:08:35.466 | 99.99th=[22938] 00:08:35.466 bw ( KiB/s): min=17728, max=20480, per=25.74%, avg=19104.00, stdev=1945.96, samples=2 00:08:35.466 iops : min= 4432, max= 5120, avg=4776.00, stdev=486.49, samples=2 00:08:35.466 lat (msec) : 10=3.88%, 20=96.09%, 50=0.03% 00:08:35.466 cpu : usr=5.77%, sys=11.14%, ctx=466, majf=0, minf=1 00:08:35.466 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:08:35.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:35.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:35.466 issued rwts: total=4608,4903,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:35.466 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:35.466 00:08:35.466 Run status group 0 (all jobs): 00:08:35.466 READ: bw=68.5MiB/s (71.9MB/s), 15.4MiB/s-19.5MiB/s (16.1MB/s-20.5MB/s), io=69.2MiB (72.5MB), run=1005-1009msec 00:08:35.466 WRITE: bw=72.5MiB/s (76.0MB/s), 15.9MiB/s-19.8MiB/s (16.7MB/s-20.8MB/s), io=73.1MiB (76.7MB), run=1005-1009msec 00:08:35.466 00:08:35.466 Disk stats (read/write): 00:08:35.466 nvme0n1: ios=3634/3673, merge=0/0, ticks=16077/18089, in_queue=34166, util=93.69% 00:08:35.466 nvme0n2: ios=4139/4364, merge=0/0, ticks=53041/51448, in_queue=104489, util=91.17% 00:08:35.466 nvme0n3: ios=3226/3584, merge=0/0, ticks=25106/26404, in_queue=51510, util=93.43% 00:08:35.466 nvme0n4: ios=3959/4096, merge=0/0, ticks=25667/24906, in_queue=50573, util=98.95% 00:08:35.466 18:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:08:35.466 18:51:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3081758 00:08:35.466 18:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:08:35.466 18:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:08:35.466 [global] 00:08:35.466 thread=1 00:08:35.466 invalidate=1 00:08:35.466 rw=read 00:08:35.466 time_based=1 00:08:35.466 runtime=10 00:08:35.466 ioengine=libaio 00:08:35.466 direct=1 00:08:35.466 bs=4096 00:08:35.466 iodepth=1 00:08:35.466 norandommap=1 00:08:35.466 numjobs=1 00:08:35.466 00:08:35.466 [job0] 00:08:35.466 filename=/dev/nvme0n1 00:08:35.466 [job1] 00:08:35.466 filename=/dev/nvme0n2 00:08:35.466 [job2] 00:08:35.466 filename=/dev/nvme0n3 00:08:35.466 [job3] 00:08:35.466 filename=/dev/nvme0n4 00:08:35.466 Could not set queue depth (nvme0n1) 00:08:35.466 Could not set queue depth (nvme0n2) 00:08:35.466 Could not set queue depth (nvme0n3) 00:08:35.466 Could not set queue depth (nvme0n4) 00:08:35.466 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:35.466 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:35.466 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:35.466 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:35.466 fio-3.35 00:08:35.466 Starting 4 threads 00:08:38.741 18:51:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:08:38.741 18:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:08:38.741 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=8990720, buflen=4096 00:08:38.741 fio: pid=3081850, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:08:38.741 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=17248256, buflen=4096 00:08:38.741 fio: pid=3081849, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:08:38.741 18:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:38.741 18:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:08:38.999 18:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:38.999 18:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:08:38.999 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=348160, buflen=4096 00:08:38.999 fio: pid=3081847, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:08:39.565 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=32550912, buflen=4096 00:08:39.565 fio: pid=3081848, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:08:39.565 18:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:39.565 18:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:08:39.565 00:08:39.565 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3081847: Wed Jul 24 18:51:16 2024 00:08:39.565 read: IOPS=24, BW=96.8KiB/s (99.1kB/s)(340KiB/3513msec) 00:08:39.565 slat (usec): min=12, max=4861, avg=87.43, stdev=528.25 00:08:39.565 clat (usec): min=40541, max=41147, avg=40972.28, stdev=72.04 00:08:39.565 lat (usec): min=40578, max=45977, avg=41060.58, stdev=554.13 00:08:39.565 clat percentiles (usec): 00:08:39.565 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:08:39.565 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:39.565 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:39.565 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:39.565 | 99.99th=[41157] 00:08:39.565 bw ( KiB/s): min= 96, max= 104, per=0.64%, avg=97.33, stdev= 3.27, samples=6 00:08:39.565 iops : min= 24, max= 26, avg=24.33, stdev= 0.82, samples=6 00:08:39.565 lat (msec) : 50=98.84% 00:08:39.565 cpu : usr=0.09%, sys=0.00%, ctx=89, majf=0, minf=1 00:08:39.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:39.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:39.565 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:39.565 issued rwts: total=86,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:39.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:39.565 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3081848: Wed Jul 24 18:51:16 2024 00:08:39.565 read: IOPS=2098, BW=8394KiB/s (8595kB/s)(31.0MiB/3787msec) 00:08:39.565 slat (usec): min=4, max=12914, avg=23.33, stdev=320.42 00:08:39.565 clat (usec): min=250, max=42031, avg=446.55, stdev=2219.33 00:08:39.565 lat (usec): min=257, max=42049, avg=469.89, stdev=2242.69 00:08:39.565 clat percentiles (usec): 00:08:39.565 | 1.00th=[ 262], 5.00th=[ 269], 10.00th=[ 273], 20.00th=[ 285], 00:08:39.565 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 314], 60.00th=[ 326], 00:08:39.565 | 70.00th=[ 338], 80.00th=[ 367], 90.00th=[ 388], 95.00th=[ 400], 00:08:39.565 | 99.00th=[ 510], 99.50th=[ 562], 99.90th=[41157], 99.95th=[41681], 00:08:39.565 | 99.99th=[42206] 00:08:39.565 bw ( KiB/s): min= 184, max=12296, per=54.32%, avg=8284.00, stdev=5407.32, samples=7 00:08:39.565 iops : min= 46, max= 3074, avg=2071.00, stdev=1351.83, samples=7 00:08:39.565 lat (usec) : 500=98.60%, 750=1.07% 00:08:39.565 lat (msec) : 20=0.01%, 50=0.30% 00:08:39.565 cpu : usr=1.40%, sys=3.70%, ctx=7958, majf=0, minf=1 00:08:39.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:39.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:39.565 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:39.565 issued rwts: total=7948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:39.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:39.565 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3081849: Wed Jul 24 18:51:16 2024 00:08:39.565 read: IOPS=1314, BW=5256KiB/s (5382kB/s)(16.4MiB/3205msec) 00:08:39.565 slat (usec): min=4, 
max=14572, avg=21.03, stdev=286.29 00:08:39.565 clat (usec): min=283, max=42059, avg=730.12, stdev=3640.09 00:08:39.565 lat (usec): min=293, max=42077, avg=751.14, stdev=3651.56 00:08:39.565 clat percentiles (usec): 00:08:39.565 | 1.00th=[ 306], 5.00th=[ 322], 10.00th=[ 338], 20.00th=[ 363], 00:08:39.565 | 30.00th=[ 379], 40.00th=[ 388], 50.00th=[ 400], 60.00th=[ 408], 00:08:39.565 | 70.00th=[ 429], 80.00th=[ 445], 90.00th=[ 474], 95.00th=[ 502], 00:08:39.565 | 99.00th=[ 627], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:08:39.565 | 99.99th=[42206] 00:08:39.565 bw ( KiB/s): min= 96, max= 9632, per=33.08%, avg=5045.33, stdev=4784.83, samples=6 00:08:39.565 iops : min= 24, max= 2408, avg=1261.33, stdev=1196.21, samples=6 00:08:39.565 lat (usec) : 500=94.78%, 750=4.37% 00:08:39.565 lat (msec) : 2=0.02%, 20=0.02%, 50=0.78% 00:08:39.565 cpu : usr=0.81%, sys=3.03%, ctx=4216, majf=0, minf=1 00:08:39.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:39.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:39.565 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:39.565 issued rwts: total=4212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:39.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:39.565 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3081850: Wed Jul 24 18:51:16 2024 00:08:39.565 read: IOPS=747, BW=2987KiB/s (3059kB/s)(8780KiB/2939msec) 00:08:39.565 slat (nsec): min=4571, max=62310, avg=12965.16, stdev=7797.85 00:08:39.565 clat (usec): min=254, max=41475, avg=1310.14, stdev=6151.88 00:08:39.565 lat (usec): min=261, max=41482, avg=1323.11, stdev=6153.84 00:08:39.565 clat percentiles (usec): 00:08:39.565 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 318], 00:08:39.565 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 359], 00:08:39.565 | 70.00th=[ 375], 80.00th=[ 396], 90.00th=[ 445], 95.00th=[ 474], 00:08:39.565 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:39.565 | 99.99th=[41681] 00:08:39.565 bw ( KiB/s): min= 96, max= 4208, per=11.34%, avg=1729.60, stdev=2026.32, samples=5 00:08:39.565 iops : min= 24, max= 1052, avg=432.40, stdev=506.58, samples=5 00:08:39.565 lat (usec) : 500=96.54%, 750=1.05% 00:08:39.565 lat (msec) : 50=2.37% 00:08:39.565 cpu : usr=0.48%, sys=1.29%, ctx=2196, majf=0, minf=1 00:08:39.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:39.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:39.565 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:39.565 issued rwts: total=2196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:39.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:39.565 00:08:39.565 Run status group 0 (all jobs): 00:08:39.565 READ: bw=14.9MiB/s (15.6MB/s), 96.8KiB/s-8394KiB/s (99.1kB/s-8595kB/s), io=56.4MiB (59.1MB), run=2939-3787msec 00:08:39.565 00:08:39.565 Disk stats (read/write): 00:08:39.565 nvme0n1: ios=81/0, merge=0/0, ticks=3321/0, in_queue=3321, util=95.88% 00:08:39.565 nvme0n2: ios=7590/0, merge=0/0, ticks=4485/0, in_queue=4485, util=98.26% 00:08:39.565 nvme0n3: ios=4059/0, merge=0/0, ticks=4113/0, in_queue=4113, util=98.38% 00:08:39.565 nvme0n4: ios=2033/0, merge=0/0, ticks=2793/0, in_queue=2793, util=96.75% 00:08:39.565 18:51:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:08:39.565 18:51:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:08:39.823 18:51:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:39.823 18:51:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:08:40.080 18:51:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:40.080 18:51:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:08:40.338 18:51:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:40.338 18:51:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:08:40.595 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:08:40.595 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3081758 00:08:40.595 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:08:40.595 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:40.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.852 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:40.852 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:08:40.852 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:40.852 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:40.852 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:40.852 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:40.852 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:08:40.852 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:08:40.852 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:08:40.852 nvmf hotplug test: fio failed as expected 00:08:40.852 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:41.110 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:08:41.110 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:08:41.110 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:08:41.110 18:51:18 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:08:41.110 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:08:41.110 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:41.110 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:08:41.110 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:41.110 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:08:41.110 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:41.110 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:41.110 rmmod nvme_tcp 00:08:41.110 rmmod nvme_fabrics 00:08:41.110 rmmod nvme_keyring 00:08:41.110 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:41.110 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:08:41.110 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:08:41.110 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3079108 ']' 00:08:41.110 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3079108 00:08:41.110 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 3079108 ']' 00:08:41.110 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 3079108 00:08:41.110 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:08:41.110 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:41.110 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3079108 00:08:41.110 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:41.110 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:41.110 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3079108' 00:08:41.110 killing process with pid 3079108 00:08:41.110 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 3079108 00:08:41.110 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 3079108 00:08:41.368 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:41.368 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:41.368 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:41.368 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:41.368 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:41.368 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.368 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 
15> /dev/null' 00:08:41.368 18:51:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.900 18:51:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:43.900 00:08:43.900 real 0m23.754s 00:08:43.900 user 1m21.930s 00:08:43.900 sys 0m7.037s 00:08:43.900 18:51:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:43.900 18:51:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:43.900 ************************************ 00:08:43.900 END TEST nvmf_fio_target 00:08:43.900 ************************************ 00:08:43.900 18:51:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:08:43.900 18:51:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:43.900 18:51:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:43.900 18:51:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:43.900 ************************************ 00:08:43.900 START TEST nvmf_bdevio 00:08:43.900 ************************************ 00:08:43.900 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:08:43.900 * Looking for test storage... 00:08:43.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:43.900 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:43.900 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:08:43.900 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.900 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.900 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.900 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.900 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.900 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.900 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.900 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.900 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.900 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.900 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:43.900 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:43.900 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.900 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 
-- # NVME_CONNECT='nvme connect' 00:08:43.900 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:43.900 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.900 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:43.900 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.900 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.900 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.900 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.901 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.901 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.901 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:08:43.901 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.901 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:08:43.901 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:43.901 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:43.901 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.901 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.901 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.901 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:43.901 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:43.901 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:43.901 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:43.901 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:43.901 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:08:43.901 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:43.901 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:43.901 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:43.901 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:43.901 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:43.901 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.901 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.901 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.901 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:43.901 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:43.901 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:08:43.901 18:51:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:45.806 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:45.806 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:08:45.806 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 
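For orientation before the device scan that follows: the nvmftestinit sequence traced above reduces to a short flow. This is a simplified sketch reconstructed from the xtrace, not the verbatim helper in nvmf/common.sh:

nvmftestinit() {
    trap nvmftestfini SIGINT SIGTERM EXIT   # guarantee teardown on any exit path
    prepare_net_devs                        # NET_TYPE=phy, so probe real NICs rather than veth pairs
    # this rig reports is_hw=yes and the transport is tcp, hence:
    nvmf_tcp_init                           # builds the netns topology seen further down
}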
00:08:45.806 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:45.806 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:45.806 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:45.806 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:45.806 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:45.807 
Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:45.807 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:45.807 Found net devices under 0000:09:00.0: cvl_0_0 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 
)) 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:45.807 Found net devices under 0000:09:00.1: cvl_0_1 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:45.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:45.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:08:45.807 00:08:45.807 --- 10.0.0.2 ping statistics --- 00:08:45.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.807 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:45.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:45.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:08:45.807 00:08:45.807 --- 10.0.0.1 ping statistics --- 00:08:45.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.807 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:45.807 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:45.808 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3084486 00:08:45.808 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:08:45.808 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3084486 00:08:45.808 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 3084486 ']' 00:08:45.808 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.808 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:45.808 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:45.808 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:45.808 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:45.808 [2024-07-24 18:51:23.394903] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:08:45.808 [2024-07-24 18:51:23.394990] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.065 EAL: No free 2048 kB hugepages reported on node 1 00:08:46.065 [2024-07-24 18:51:23.460229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:46.065 [2024-07-24 18:51:23.570670] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.065 [2024-07-24 18:51:23.570728] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.065 [2024-07-24 18:51:23.570742] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:46.065 [2024-07-24 18:51:23.570752] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:46.065 [2024-07-24 18:51:23.570762] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:46.065 [2024-07-24 18:51:23.570861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:46.065 [2024-07-24 18:51:23.570934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:46.065 [2024-07-24 18:51:23.571000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:46.065 [2024-07-24 18:51:23.571003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:46.322 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:46.322 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:08:46.322 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:46.322 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:46.322 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:46.322 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.322 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:46.322 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.322 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:46.322 [2024-07-24 18:51:23.729689] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.323 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.323 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:46.323 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.323 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:46.323 Malloc0 00:08:46.323 
18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.323 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:46.323 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.323 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:46.323 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.323 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:46.323 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.323 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:46.323 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.323 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:46.323 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.323 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:46.323 [2024-07-24 18:51:23.783330] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:46.323 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.323 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:08:46.323 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:08:46.323 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:08:46.323 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:08:46.323 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:46.323 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:46.323 { 00:08:46.323 "params": { 00:08:46.323 "name": "Nvme$subsystem", 00:08:46.323 "trtype": "$TEST_TRANSPORT", 00:08:46.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:46.323 "adrfam": "ipv4", 00:08:46.323 "trsvcid": "$NVMF_PORT", 00:08:46.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:46.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:46.323 "hdgst": ${hdgst:-false}, 00:08:46.323 "ddgst": ${ddgst:-false} 00:08:46.323 }, 00:08:46.323 "method": "bdev_nvme_attach_controller" 00:08:46.323 } 00:08:46.323 EOF 00:08:46.323 )") 00:08:46.323 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:08:46.323 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
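Stripped of the rpc_cmd wrapper, the target-side provisioning traced above is five RPCs. A sketch using SPDK's scripts/rpc.py client, with the arguments taken verbatim from the trace:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The generated JSON that follows is handed to bdevio over /dev/fd/62, i.e. bash process substitution, so the initiator-side config never touches disk.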
00:08:46.323 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:08:46.323 18:51:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:46.323 "params": { 00:08:46.323 "name": "Nvme1", 00:08:46.323 "trtype": "tcp", 00:08:46.323 "traddr": "10.0.0.2", 00:08:46.323 "adrfam": "ipv4", 00:08:46.323 "trsvcid": "4420", 00:08:46.323 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:46.323 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:46.323 "hdgst": false, 00:08:46.323 "ddgst": false 00:08:46.323 }, 00:08:46.323 "method": "bdev_nvme_attach_controller" 00:08:46.323 }' 00:08:46.323 [2024-07-24 18:51:23.831717] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:08:46.323 [2024-07-24 18:51:23.831782] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3084636 ] 00:08:46.323 EAL: No free 2048 kB hugepages reported on node 1 00:08:46.323 [2024-07-24 18:51:23.892539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:46.580 [2024-07-24 18:51:24.010641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.580 [2024-07-24 18:51:24.010691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.580 [2024-07-24 18:51:24.010694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.838 I/O targets: 00:08:46.838 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:08:46.838 00:08:46.838 00:08:46.838 CUnit - A unit testing framework for C - Version 2.1-3 00:08:46.838 http://cunit.sourceforge.net/ 00:08:46.838 00:08:46.838 00:08:46.838 Suite: bdevio tests on: Nvme1n1 00:08:46.838 Test: blockdev write read block ...passed 00:08:46.838 Test: blockdev write zeroes read block ...passed 00:08:46.838 Test: blockdev write zeroes read no split ...passed 00:08:46.838 Test: blockdev write zeroes read split ...passed 00:08:46.838 Test: blockdev write zeroes read split partial ...passed 00:08:46.838 Test: blockdev reset ...[2024-07-24 18:51:24.400999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:08:46.838 [2024-07-24 18:51:24.401106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x225a580 (9): Bad file descriptor 00:08:46.838 [2024-07-24 18:51:24.415284] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:46.838 passed 00:08:46.838 Test: blockdev write read 8 blocks ...passed 00:08:46.838 Test: blockdev write read size > 128k ...passed 00:08:46.838 Test: blockdev write read invalid size ...passed 00:08:47.161 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:47.161 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:47.161 Test: blockdev write read max offset ...passed 00:08:47.161 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:47.161 Test: blockdev writev readv 8 blocks ...passed 00:08:47.161 Test: blockdev writev readv 30 x 1block ...passed 00:08:47.161 Test: blockdev writev readv block ...passed 00:08:47.161 Test: blockdev writev readv size > 128k ...passed 00:08:47.161 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:47.161 Test: blockdev comparev and writev ...[2024-07-24 18:51:24.629360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:47.161 [2024-07-24 18:51:24.629425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:08:47.161 [2024-07-24 18:51:24.629451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:47.161 [2024-07-24 18:51:24.629468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:08:47.161 [2024-07-24 18:51:24.629867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:47.161 [2024-07-24 18:51:24.629892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:08:47.161 [2024-07-24 18:51:24.629925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:47.161 [2024-07-24 18:51:24.629941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:08:47.161 [2024-07-24 18:51:24.630308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:47.161 [2024-07-24 18:51:24.630334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:08:47.161 [2024-07-24 18:51:24.630356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:47.161 [2024-07-24 18:51:24.630372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:08:47.161 [2024-07-24 18:51:24.630757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:47.161 [2024-07-24 18:51:24.630781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:08:47.161 [2024-07-24 18:51:24.630803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:47.161 [2024-07-24 18:51:24.630819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:08:47.161 passed 00:08:47.161 Test: blockdev nvme passthru rw ...passed 00:08:47.161 Test: blockdev nvme passthru vendor specific ...[2024-07-24 18:51:24.714430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:47.161 [2024-07-24 18:51:24.714467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:08:47.161 [2024-07-24 18:51:24.714661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:47.161 [2024-07-24 18:51:24.714683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:08:47.161 [2024-07-24 18:51:24.714875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:47.161 [2024-07-24 18:51:24.714897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:08:47.161 [2024-07-24 18:51:24.715086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:47.161 [2024-07-24 18:51:24.715118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:08:47.161 passed 00:08:47.161 Test: blockdev nvme admin passthru ...passed 00:08:47.419 Test: blockdev copy ...passed 00:08:47.419 00:08:47.419 Run Summary: Type Total Ran Passed Failed Inactive 00:08:47.419 suites 1 1 n/a 0 0 00:08:47.419 tests 23 23 23 0 0 00:08:47.419 asserts 152 152 152 0 n/a 00:08:47.419 00:08:47.419 Elapsed time = 1.153 seconds 00:08:47.419 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:47.419 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.419 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:47.676 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.676 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:08:47.676 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:08:47.676 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:47.676 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:08:47.676 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:47.676 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:08:47.676 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:47.676 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:47.676 rmmod nvme_tcp 00:08:47.676 rmmod nvme_fabrics 00:08:47.676 rmmod nvme_keyring 00:08:47.676 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:47.676 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:08:47.676 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
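The teardown recorded here follows a fixed dependency order; a condensed sketch, with the pid and interface name as they appear in this run:

modprobe -r nvme-tcp            # the rmmod lines above show nvme_fabrics and nvme_keyring going with it
modprobe -r nvme-fabrics
kill 3084486 && wait 3084486    # the nvmf_tgt started earlier as nvmfpid
ip -4 addr flush cvl_0_1        # the namespace itself is dropped by _remove_spdk_ns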
00:08:47.676 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3084486 ']' 00:08:47.676 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3084486 00:08:47.676 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 3084486 ']' 00:08:47.676 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 3084486 00:08:47.676 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:08:47.676 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:47.676 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3084486 00:08:47.676 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:08:47.676 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:08:47.677 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3084486' 00:08:47.677 killing process with pid 3084486 00:08:47.677 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 3084486 00:08:47.677 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 3084486 00:08:47.935 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:47.935 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:47.935 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:47.935 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:47.935 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:47.935 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.935 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.935 18:51:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:50.466 00:08:50.466 real 0m6.459s 00:08:50.466 user 0m10.149s 00:08:50.466 sys 0m2.134s 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:50.466 ************************************ 00:08:50.466 END TEST nvmf_bdevio 00:08:50.466 ************************************ 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:50.466 00:08:50.466 real 3m55.886s 00:08:50.466 user 10m8.779s 00:08:50.466 sys 1m8.452s 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:50.466 ************************************ 00:08:50.466 END TEST nvmf_target_core 00:08:50.466 ************************************ 00:08:50.466 18:51:27 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:08:50.466 18:51:27 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:50.466 18:51:27 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:50.466 18:51:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:50.466 ************************************ 00:08:50.466 START TEST nvmf_target_extra 00:08:50.466 ************************************ 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:08:50.466 * Looking for test storage... 00:08:50.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
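run_test, which frames every suite in this log, is essentially a timing-and-banner wrapper around a test script. A rough reconstruction from the banners and "real/user/sys" totals above, not the exact autotest_common.sh source:

run_test() {
    local test_name=$1; shift
    echo '************************************'
    echo "START TEST $test_name"
    echo '************************************'
    time "$@"                   # runs e.g. nvmf_example.sh --transport=tcp
    local rc=$?
    echo '************************************'
    echo "END TEST $test_name"
    echo '************************************'
    return $rc
}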
00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:08:50.466 ************************************ 00:08:50.466 START TEST nvmf_example 00:08:50.466 ************************************ 00:08:50.466 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:50.466 * Looking for test storage... 00:08:50.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.467 18:51:27 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:08:50.467 18:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:52.384 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:52.384 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:52.384 Found net devices under 0000:09:00.0: cvl_0_0 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.384 18:51:29 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:52.384 Found net devices under 0000:09:00.1: cvl_0_1 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:52.384 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:52.385 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:52.385 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:52.385 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:52.385 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:52.385 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:52.385 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:52.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:52.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:08:52.385 00:08:52.385 --- 10.0.0.2 ping statistics --- 00:08:52.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.385 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:08:52.385 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:52.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:52.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:08:52.385 00:08:52.385 --- 10.0.0.1 ping statistics --- 00:08:52.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.385 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:08:52.385 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:52.385 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:08:52.385 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:52.385 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:52.385 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:52.385 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:52.385 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:52.385 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:52.385 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:52.385 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:52.385 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:52.385 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:52.385 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:52.385 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:52.385 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:52.385 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3086761 00:08:52.385 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:52.385 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:52.385 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3086761 00:08:52.385 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 3086761 ']' 00:08:52.385 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.385 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:52.385 18:51:29 
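
The nvmf_tcp_init trace above is the heart of this setup: nvmf/common.sh maps each matched PCI function to its kernel netdev via the /sys/bus/pci/devices/$pci/net/* glob, then isolates the target-side port in a private network namespace so both ends of the physical loopback can be addressed from one host. A minimal standalone sketch of that topology, using the interface names and addresses from this run (assumes root and that the ice driver has already bound cvl_0_0/cvl_0_1; this is not the harness code itself):

  ls /sys/bus/pci/devices/0000:09:00.0/net/                           # -> cvl_0_0, as the glob above finds it
  ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port, as in the trace
  ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

Both pings round-trip in roughly 0.12 ms above, confirming the link before the example target is launched inside the namespace.
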
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.385 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:52.385 18:51:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:52.385 EAL: No free 2048 kB hugepages reported on node 1 00:08:53.318 18:51:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:53.318 18:51:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:08:53.318 18:51:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:53.318 18:51:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:53.318 18:51:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:53.318 18:51:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:53.318 18:51:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.318 18:51:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:53.318 18:51:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.318 18:51:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:53.318 18:51:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.318 18:51:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:53.318 18:51:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.318 18:51:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:53.318 18:51:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:53.318 18:51:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.318 18:51:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:53.575 18:51:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.575 18:51:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:53.575 18:51:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:53.575 18:51:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.575 18:51:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:53.575 18:51:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.575 18:51:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:53.575 18:51:30 
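
rpc_cmd in the trace above is the suite's JSON-RPC helper; each call corresponds to one scripts/rpc.py invocation against the /var/tmp/spdk.sock socket named in the waitforlisten message. A sketch of the same provisioning issued directly (rpc.py path taken from this workspace; the flag readings in the comments are editorial, and -o/-u are copied verbatim from the run):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport, options as passed above
  $rpc bdev_malloc_create 64 512                    # 64 MB RAM-backed bdev, 512 B blocks -> Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a allows any host NQN
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose the bdev as a namespace
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # listen on the target IP

The RPC endpoint is a path-based UNIX socket on the shared filesystem, which is why the helper can drive an application running inside cvl_0_0_ns_spdk without entering the namespace.
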
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.575 18:51:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:53.575 18:51:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.576 18:51:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:53.576 18:51:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:53.576 EAL: No free 2048 kB hugepages reported on node 1 00:09:03.535 Initializing NVMe Controllers 00:09:03.535 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:03.535 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:03.535 Initialization complete. Launching workers. 00:09:03.535 ======================================================== 00:09:03.535 Latency(us) 00:09:03.535 Device Information : IOPS MiB/s Average min max 00:09:03.535 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13658.01 53.35 4685.48 660.02 15264.10 00:09:03.535 ======================================================== 00:09:03.535 Total : 13658.01 53.35 4685.48 660.02 15264.10 00:09:03.535 00:09:03.535 18:51:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:03.535 18:51:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:03.535 18:51:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:03.535 18:51:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:09:03.536 18:51:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:03.536 18:51:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:09:03.536 18:51:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:03.536 18:51:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:03.536 rmmod nvme_tcp 00:09:03.536 rmmod nvme_fabrics 00:09:03.536 rmmod nvme_keyring 00:09:03.793 18:51:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:03.793 18:51:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:09:03.793 18:51:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:09:03.793 18:51:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3086761 ']' 00:09:03.793 18:51:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3086761 00:09:03.794 18:51:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 3086761 ']' 00:09:03.794 18:51:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 3086761 00:09:03.794 18:51:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:09:03.794 18:51:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:03.794 18:51:41 
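
For the record, the numbers in the table above are self-consistent: 13658 IOPS at 4 KiB is 13658 x 4096 / 2^20 ≈ 53.35 MiB/s, and at queue depth 64 an average latency of ~4685 us gives 64 / 0.004685 s ≈ 13.7K IOPS (Little's law). The perf invocation that produced them, restated with its flags annotated (same binary and arguments as the run; the comments are editorial):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/bin/spdk_nvme_perf \
      -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
  # -q  queue depth (outstanding I/O per namespace)
  # -o  I/O size in bytes
  # -w  workload pattern (random mixed read/write)
  # -M  read percentage of the mix: 30% reads, 70% writes
  # -t  run time in seconds
  # -r  transport ID of the subsystem under test
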
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3086761 00:09:03.794 18:51:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:09:03.794 18:51:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:09:03.794 18:51:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3086761' 00:09:03.794 killing process with pid 3086761 00:09:03.794 18:51:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 3086761 00:09:03.794 18:51:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 3086761 00:09:04.052 nvmf threads initialize successfully 00:09:04.052 bdev subsystem init successfully 00:09:04.052 created an nvmf target service 00:09:04.052 create target's poll groups done 00:09:04.052 all subsystems of target started 00:09:04.052 nvmf target is running 00:09:04.052 all subsystems of target stopped 00:09:04.052 destroy target's poll groups done 00:09:04.052 destroyed the nvmf target service 00:09:04.052 bdev subsystem finish successfully 00:09:04.052 nvmf threads destroy successfully 00:09:04.052 18:51:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:04.052 18:51:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:04.052 18:51:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:04.052 18:51:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:04.052 18:51:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:04.052 18:51:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.052 18:51:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:04.052 18:51:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.955 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:05.955 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:05.955 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:05.955 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:05.955 00:09:05.955 real 0m15.850s 00:09:05.955 user 0m40.964s 00:09:05.955 sys 0m4.824s 00:09:05.955 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:05.955 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:05.955 ************************************ 00:09:05.955 END TEST nvmf_example 00:09:05.955 ************************************ 00:09:05.955 18:51:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:05.955 18:51:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:05.955 18:51:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:05.955 18:51:43 
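
nvmftestfini above unwinds the whole setup in reverse. A sketch of the equivalent manual teardown (pid and names taken from this run; _remove_spdk_ns is the tree's own helper, so the namespace delete below is an assumption about its visible effect, not its code):

  modprobe -v -r nvme-tcp              # drops nvme_tcp/nvme_fabrics/nvme_keyring, per the rmmod lines above
  modprobe -v -r nvme-fabrics
  kill 3086761 && wait 3086761         # stop the example target and reap it
  ip netns delete cvl_0_0_ns_spdk      # cvl_0_0 returns to the default namespace once the netns is gone
  ip -4 addr flush cvl_0_1             # drop the initiator-side address, as the trace does
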
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:05.955 ************************************ 00:09:05.955 START TEST nvmf_filesystem 00:09:05.955 ************************************ 00:09:05.955 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:06.216 * Looking for test storage... 00:09:06.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:06.216 18:51:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:06.216 18:51:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:09:06.216 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:09:06.217 18:51:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:06.217 18:51:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:06.217 #define SPDK_CONFIG_H 00:09:06.217 #define SPDK_CONFIG_APPS 1 00:09:06.217 #define SPDK_CONFIG_ARCH native 00:09:06.217 #undef SPDK_CONFIG_ASAN 00:09:06.217 #undef SPDK_CONFIG_AVAHI 00:09:06.217 #undef SPDK_CONFIG_CET 00:09:06.217 #define SPDK_CONFIG_COVERAGE 1 00:09:06.217 #define SPDK_CONFIG_CROSS_PREFIX 00:09:06.217 #undef SPDK_CONFIG_CRYPTO 00:09:06.217 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:06.217 #undef SPDK_CONFIG_CUSTOMOCF 00:09:06.217 #undef SPDK_CONFIG_DAOS 00:09:06.217 #define SPDK_CONFIG_DAOS_DIR 00:09:06.217 #define SPDK_CONFIG_DEBUG 1 00:09:06.217 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:06.217 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:06.217 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:06.217 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:06.217 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:06.217 #undef SPDK_CONFIG_DPDK_UADK 00:09:06.217 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:06.217 #define SPDK_CONFIG_EXAMPLES 1 00:09:06.217 #undef SPDK_CONFIG_FC 00:09:06.217 #define SPDK_CONFIG_FC_PATH 00:09:06.217 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:06.217 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:06.217 #undef SPDK_CONFIG_FUSE 00:09:06.217 #undef SPDK_CONFIG_FUZZER 00:09:06.217 #define SPDK_CONFIG_FUZZER_LIB 00:09:06.217 #undef SPDK_CONFIG_GOLANG 00:09:06.217 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:06.217 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:06.217 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:06.217 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:06.217 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:06.217 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:06.217 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:06.217 #define SPDK_CONFIG_IDXD 1 00:09:06.217 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:06.217 #undef SPDK_CONFIG_IPSEC_MB 00:09:06.217 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:06.217 #define SPDK_CONFIG_ISAL 1 00:09:06.217 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:06.217 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:06.217 #define SPDK_CONFIG_LIBDIR 00:09:06.217 #undef SPDK_CONFIG_LTO 00:09:06.217 #define SPDK_CONFIG_MAX_LCORES 128 00:09:06.217 #define SPDK_CONFIG_NVME_CUSE 1 00:09:06.217 #undef SPDK_CONFIG_OCF 00:09:06.217 #define SPDK_CONFIG_OCF_PATH 00:09:06.217 #define SPDK_CONFIG_OPENSSL_PATH 00:09:06.217 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:06.217 #define SPDK_CONFIG_PGO_DIR 00:09:06.217 #undef SPDK_CONFIG_PGO_USE 00:09:06.217 #define SPDK_CONFIG_PREFIX /usr/local 00:09:06.217 #undef SPDK_CONFIG_RAID5F 00:09:06.217 #undef SPDK_CONFIG_RBD 00:09:06.217 #define SPDK_CONFIG_RDMA 1 00:09:06.217 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:06.217 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:06.217 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:06.217 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:06.217 #define SPDK_CONFIG_SHARED 1 00:09:06.217 #undef SPDK_CONFIG_SMA 00:09:06.217 #define SPDK_CONFIG_TESTS 1 00:09:06.217 #undef SPDK_CONFIG_TSAN 00:09:06.217 #define SPDK_CONFIG_UBLK 1 00:09:06.217 #define SPDK_CONFIG_UBSAN 1 00:09:06.217 #undef SPDK_CONFIG_UNIT_TESTS 00:09:06.217 #undef SPDK_CONFIG_URING 00:09:06.217 #define SPDK_CONFIG_URING_PATH 00:09:06.217 #undef SPDK_CONFIG_URING_ZNS 00:09:06.217 #undef SPDK_CONFIG_USDT 00:09:06.217 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:06.217 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:06.217 #define SPDK_CONFIG_VFIO_USER 1 00:09:06.217 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:09:06.217 #define SPDK_CONFIG_VHOST 1 00:09:06.217 #define SPDK_CONFIG_VIRTIO 1 00:09:06.217 #undef SPDK_CONFIG_VTUNE 00:09:06.217 #define SPDK_CONFIG_VTUNE_DIR 00:09:06.217 #define SPDK_CONFIG_WERROR 1 00:09:06.217 #define SPDK_CONFIG_WPDK_DIR 00:09:06.217 #undef SPDK_CONFIG_XNVME 00:09:06.217 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.217 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:06.218 18:51:43 
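
A side note on the PATH values echoed above: paths/export.sh prepends the golangci, protoc, and go directories each time it is sourced, and by this point in the job it has evidently been sourced several times, hence the repeated entries. Lookup is unaffected (first match wins), so the duplication is cosmetic; a guard that would make the prepend idempotent, purely as an illustration (not present in export.sh):

  # prepend a directory to PATH only if it is not already there
  path_prepend() {
    case ":$PATH:" in
      *":$1:"*) ;;            # already present, leave PATH alone
      *) PATH=$1:$PATH ;;
    esac
  }
  path_prepend /opt/go/1.21.1/bin
  export PATH
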
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 
00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:06.218 18:51:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:09:06.218 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:06.219 18:51:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export 
SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:06.219 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j48 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 3088460 ]] 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 3088460 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.WfbZM7 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.WfbZM7/tests/target /tmp/spdk.WfbZM7 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@329 -- # df -T 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=952066048 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4332363776 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=51272155136 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=61994708992 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=10722553856 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30986096640 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997352448 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=11255808 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 
00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=12376530944 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=12398944256 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=22413312 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30996271104 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997356544 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=1085440 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6199463936 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6199468032 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:09:06.220 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:09:06.221 * Looking for test storage... 
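The set_test_storage trace above builds a list of candidate directories (the test dir itself, then a mktemp-derived fallback such as /tmp/spdk.WfbZM7) and, in the df scan that follows, parses `df -T` into associative arrays keyed by mount point so it can pick the first candidate with enough free space. A minimal standalone sketch of that selection logic, assuming GNU df and the ~2 GiB request seen in the trace (this is a simplification, not the harness's exact code):

  # Sketch of the storage-candidate scan traced above (simplified; not the harness itself).
  requested_size=$((2 * 1024 * 1024 * 1024 + 64 * 1024 * 1024))  # 2214592512 bytes, as in the trace
  storage_fallback=$(mktemp -udt spdk.XXXXXX)                    # unused temp path, e.g. /tmp/spdk.WfbZM7
  storage_candidates=("$PWD" "$storage_fallback/tests" "$storage_fallback")

  for target_dir in "${storage_candidates[@]}"; do
      mkdir -p "$target_dir"
      mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/ {print $6}')  # backing mount point
      target_space=$(df --output=avail -B1 "$target_dir" | tail -1)    # available bytes on it
      if (( target_space >= requested_size )); then
          printf '* Found test storage at %s (mount %s)\n' "$target_dir" "$mount"
          break
      fi
  done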
00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=51272155136 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # new_size=12937146368 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:06.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.221 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.222 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:06.222 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.222 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:09:06.222 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:06.222 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:06.222 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:09:06.222 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.222 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.222 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:06.222 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:06.222 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:06.222 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:06.222 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:06.222 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:06.222 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:06.222 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:06.222 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:06.222 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:06.222 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:06.222 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.222 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.222 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.222 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:06.222 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:06.222 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:06.222 18:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:08.751 
18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:08.751 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:08.751 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:08.751 Found net devices under 0000:09:00.0: cvl_0_0 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:08.751 Found net devices under 0000:09:00.1: cvl_0_1 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:08.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:09:08.751 00:09:08.751 --- 10.0.0.2 ping statistics --- 00:09:08.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.751 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:08.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:08.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:09:08.751 00:09:08.751 --- 10.0.0.1 ping statistics --- 00:09:08.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.751 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:08.751 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.752 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:08.752 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:08.752 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.752 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:08.752 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:08.752 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:08.752 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:08.752 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.752 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:08.752 ************************************ 00:09:08.752 START TEST nvmf_filesystem_no_in_capsule 00:09:08.752 ************************************ 00:09:08.752 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:09:08.752 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:08.752 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:08.752 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:08.752 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:08.752 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:08.752 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3090087 00:09:08.752 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:08.752 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3090087 00:09:08.752 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 3090087 ']' 00:09:08.752 
18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.752 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:08.752 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.752 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:08.752 18:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:08.752 [2024-07-24 18:51:46.020691] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:09:08.752 [2024-07-24 18:51:46.020775] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.752 EAL: No free 2048 kB hugepages reported on node 1 00:09:08.752 [2024-07-24 18:51:46.090143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:08.752 [2024-07-24 18:51:46.214756] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:08.752 [2024-07-24 18:51:46.214818] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:08.752 [2024-07-24 18:51:46.214835] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:08.752 [2024-07-24 18:51:46.214848] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:08.752 [2024-07-24 18:51:46.214860] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
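The nvmfappstart/waitforlisten pair traced here launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then polls its JSON-RPC Unix socket before any rpc_cmd calls are issued. A minimal sketch of that start-and-wait pattern, with relative paths and a retry budget that are assumptions rather than the harness's exact values:

  # Sketch: start nvmf_tgt in the target netns, then wait for its RPC socket (assumed retry budget).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  rpc_addr=/var/tmp/spdk.sock
  for ((i = 0; i < 100; i++)); do                    # ~10 s total (assumption)
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early"; exit 1; }
      if [[ -S $rpc_addr ]] && ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
          break                                      # target is up and answering RPCs
      fi
      sleep 0.1
  done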
00:09:08.752 [2024-07-24 18:51:46.214950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.752 [2024-07-24 18:51:46.215013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:08.752 [2024-07-24 18:51:46.215067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.752 [2024-07-24 18:51:46.215064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:09.684 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:09.684 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:09:09.684 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:09.684 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:09.684 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:09.684 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.684 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:09.684 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:09.684 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.684 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:09.684 [2024-07-24 18:51:46.990953] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.684 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.684 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:09.684 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.684 18:51:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:09.684 Malloc1 00:09:09.684 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.684 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:09.684 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.684 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:09.684 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.684 18:51:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:09.684 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.685 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:09.685 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.685 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:09.685 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.685 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:09.685 [2024-07-24 18:51:47.182354] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:09.685 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.685 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:09.685 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:09:09.685 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:09:09.685 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:09:09.685 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:09:09.685 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:09.685 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.685 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:09.685 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.685 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:09:09.685 { 00:09:09.685 "name": "Malloc1", 00:09:09.685 "aliases": [ 00:09:09.685 "8f0c5ed2-68ed-4f16-b681-ae5983c5f4fe" 00:09:09.685 ], 00:09:09.685 "product_name": "Malloc disk", 00:09:09.685 "block_size": 512, 00:09:09.685 "num_blocks": 1048576, 00:09:09.685 "uuid": "8f0c5ed2-68ed-4f16-b681-ae5983c5f4fe", 00:09:09.685 "assigned_rate_limits": { 00:09:09.685 "rw_ios_per_sec": 0, 00:09:09.685 "rw_mbytes_per_sec": 0, 00:09:09.685 "r_mbytes_per_sec": 0, 00:09:09.685 "w_mbytes_per_sec": 0 00:09:09.685 }, 00:09:09.685 "claimed": true, 00:09:09.685 "claim_type": "exclusive_write", 00:09:09.685 "zoned": false, 00:09:09.685 "supported_io_types": { 00:09:09.685 "read": 
true, 00:09:09.685 "write": true, 00:09:09.685 "unmap": true, 00:09:09.685 "flush": true, 00:09:09.685 "reset": true, 00:09:09.685 "nvme_admin": false, 00:09:09.685 "nvme_io": false, 00:09:09.685 "nvme_io_md": false, 00:09:09.685 "write_zeroes": true, 00:09:09.685 "zcopy": true, 00:09:09.685 "get_zone_info": false, 00:09:09.685 "zone_management": false, 00:09:09.685 "zone_append": false, 00:09:09.685 "compare": false, 00:09:09.685 "compare_and_write": false, 00:09:09.685 "abort": true, 00:09:09.685 "seek_hole": false, 00:09:09.685 "seek_data": false, 00:09:09.685 "copy": true, 00:09:09.685 "nvme_iov_md": false 00:09:09.685 }, 00:09:09.685 "memory_domains": [ 00:09:09.685 { 00:09:09.685 "dma_device_id": "system", 00:09:09.685 "dma_device_type": 1 00:09:09.685 }, 00:09:09.685 { 00:09:09.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.685 "dma_device_type": 2 00:09:09.685 } 00:09:09.685 ], 00:09:09.685 "driver_specific": {} 00:09:09.685 } 00:09:09.685 ]' 00:09:09.685 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:09:09.685 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:09:09.685 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:09:09.685 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:09:09.685 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:09:09.685 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:09:09.685 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:09.685 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:10.617 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:10.617 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:09:10.617 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:10.617 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:10.617 18:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:09:12.512 18:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:12.512 18:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:12.512 18:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:09:12.512 18:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:12.512 18:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:12.512 18:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:09:12.512 18:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:12.512 18:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:12.512 18:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:12.512 18:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:12.512 18:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:12.512 18:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:12.513 18:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:12.513 18:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:12.513 18:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:12.513 18:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:12.513 18:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:12.770 18:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:13.334 18:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:14.267 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:14.267 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:14.267 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:14.267 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:14.267 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:14.267 ************************************ 00:09:14.267 START TEST filesystem_ext4 00:09:14.267 ************************************ 00:09:14.267 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
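The records above are the harness's common setup path from target/filesystem.sh: connect an NVMe/TCP initiator to the exported namespace, poll lsblk until a device with the expected serial appears, verify its size against the 512 MiB malloc bdev, and lay down a single GPT partition before the per-filesystem tests run. A minimal standalone sketch of that flow, with the polling loop simplified and the addresses, serial, and regex taken from the log:

  # connect to the SPDK target exported above (addresses as in the log)
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
  # wait for the namespace to show up with the expected serial
  until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 1; done
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
  # sanity-check the size (sysfs reports 512-byte sectors) against the 512 MiB bdev
  (( $(cat /sys/block/$nvme_name/size) * 512 == 536870912 ))
  # one GPT partition covering the whole namespace, then re-read the partition table
  parted -s /dev/$nvme_name mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe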
00:09:14.267 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:14.267 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:14.267 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:14.267 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:09:14.267 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:14.267 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:09:14.267 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:09:14.267 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:09:14.267 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:09:14.267 18:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:14.267 mke2fs 1.46.5 (30-Dec-2021) 00:09:14.267 Discarding device blocks: 0/522240 done 00:09:14.267 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:14.267 Filesystem UUID: fa1a6d38-0b93-4868-aa18-2a3099458153 00:09:14.267 Superblock backups stored on blocks: 00:09:14.267 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:14.267 00:09:14.267 Allocating group tables: 0/64 done 00:09:14.267 Writing inode tables: 0/64 done 00:09:14.832 Creating journal (8192 blocks): done 00:09:15.396 Writing superblocks and filesystem accounting information: 0/64 done 00:09:15.396 00:09:15.396 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:09:15.396 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:15.396 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:15.396 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:15.396 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:15.396 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:15.396 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:15.396 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:15.396 
18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3090087 00:09:15.396 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:15.396 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:15.397 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:15.397 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:15.397 00:09:15.397 real 0m1.288s 00:09:15.397 user 0m0.016s 00:09:15.397 sys 0m0.060s 00:09:15.397 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:15.397 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:15.397 ************************************ 00:09:15.397 END TEST filesystem_ext4 00:09:15.397 ************************************ 00:09:15.397 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:15.397 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:15.397 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:15.397 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:15.397 ************************************ 00:09:15.397 START TEST filesystem_btrfs 00:09:15.397 ************************************ 00:09:15.397 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:15.397 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:15.397 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:15.397 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:15.397 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:09:15.397 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:15.397 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:09:15.397 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:09:15.397 18:51:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:09:15.397 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:09:15.397 18:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:15.655 btrfs-progs v6.6.2 00:09:15.655 See https://btrfs.readthedocs.io for more information. 00:09:15.655 00:09:15.655 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:15.655 NOTE: several default settings have changed in version 5.15, please make sure 00:09:15.655 this does not affect your deployments: 00:09:15.655 - DUP for metadata (-m dup) 00:09:15.655 - enabled no-holes (-O no-holes) 00:09:15.655 - enabled free-space-tree (-R free-space-tree) 00:09:15.655 00:09:15.655 Label: (null) 00:09:15.655 UUID: 97f03d73-6eb0-43cb-8d1c-379bf0e8c3c2 00:09:15.655 Node size: 16384 00:09:15.655 Sector size: 4096 00:09:15.655 Filesystem size: 510.00MiB 00:09:15.655 Block group profiles: 00:09:15.655 Data: single 8.00MiB 00:09:15.655 Metadata: DUP 32.00MiB 00:09:15.655 System: DUP 8.00MiB 00:09:15.655 SSD detected: yes 00:09:15.655 Zoned device: no 00:09:15.655 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:15.655 Runtime features: free-space-tree 00:09:15.655 Checksum: crc32c 00:09:15.655 Number of devices: 1 00:09:15.655 Devices: 00:09:15.655 ID SIZE PATH 00:09:15.655 1 510.00MiB /dev/nvme0n1p1 00:09:15.655 00:09:15.655 18:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:09:15.655 18:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:16.584 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:16.584 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:16.584 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:16.584 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:16.584 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:16.584 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:16.584 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3090087 00:09:16.584 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:16.584 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:16.584 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
lsblk -l -o NAME 00:09:16.584 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:16.584 00:09:16.584 real 0m1.111s 00:09:16.584 user 0m0.025s 00:09:16.584 sys 0m0.107s 00:09:16.584 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:16.584 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:16.584 ************************************ 00:09:16.584 END TEST filesystem_btrfs 00:09:16.584 ************************************ 00:09:16.584 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:16.584 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:16.584 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:16.584 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:16.584 ************************************ 00:09:16.584 START TEST filesystem_xfs 00:09:16.584 ************************************ 00:09:16.584 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:09:16.584 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:16.584 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:16.584 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:16.584 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:09:16.585 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:16.585 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:09:16.585 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:09:16.585 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:09:16.585 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:09:16.585 18:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:16.842 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:16.842 = sectsz=512 attr=2, projid32bit=1 00:09:16.842 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:16.842 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:09:16.842 data = bsize=4096 blocks=130560, imaxpct=25 00:09:16.842 = sunit=0 swidth=0 blks 00:09:16.842 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:16.842 log =internal log bsize=4096 blocks=16384, version=2 00:09:16.842 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:16.842 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:17.781 Discarding blocks...Done. 00:09:17.781 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:09:17.781 18:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3090087 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:20.375 00:09:20.375 real 0m3.357s 00:09:20.375 user 0m0.021s 00:09:20.375 sys 0m0.056s 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:20.375 ************************************ 00:09:20.375 END TEST filesystem_xfs 00:09:20.375 ************************************ 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:20.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3090087 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 3090087 ']' 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 3090087 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3090087 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3090087' 00:09:20.375 killing process with pid 3090087 00:09:20.375 18:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 3090087 00:09:20.375 18:51:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 3090087 00:09:20.943 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:20.943 00:09:20.943 real 0m12.509s 00:09:20.943 user 0m48.076s 00:09:20.943 sys 0m1.770s 00:09:20.943 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:20.943 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:20.943 ************************************ 00:09:20.943 END TEST nvmf_filesystem_no_in_capsule 00:09:20.943 ************************************ 00:09:20.943 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:20.943 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:20.943 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:20.943 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:20.943 ************************************ 00:09:20.943 START TEST nvmf_filesystem_in_capsule 00:09:20.943 ************************************ 00:09:20.943 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:09:20.943 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:20.943 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:20.943 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:20.943 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:20.943 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:20.943 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3091771 00:09:20.943 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:20.943 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3091771 00:09:20.943 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 3091771 ']' 00:09:20.943 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.943 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:20.943 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
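At this point the first target (pid 3090087) has been shut down and the same suite restarts with in-capsule data enabled: nvmf_filesystem_part is invoked with 4096 instead of 0, and nvmfappstart launches a fresh nvmf_tgt (pid 3091771) inside the cvl_0_0_ns_spdk network namespace, then blocks until its RPC socket answers. Roughly what nvmfappstart/waitforlisten do, as a sketch with paths shortened relative to the spdk checkout; the rpc.py poll and interval are assumptions, not taken from this log:

  # start a fresh target in the test netns and wait for its RPC socket
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done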
00:09:20.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.943 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:20.943 18:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:21.201 [2024-07-24 18:51:58.581611] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:09:21.201 [2024-07-24 18:51:58.581697] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.201 EAL: No free 2048 kB hugepages reported on node 1 00:09:21.201 [2024-07-24 18:51:58.652300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:21.201 [2024-07-24 18:51:58.774925] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:21.201 [2024-07-24 18:51:58.774986] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:21.201 [2024-07-24 18:51:58.775003] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:21.201 [2024-07-24 18:51:58.775016] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:21.201 [2024-07-24 18:51:58.775027] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:21.201 [2024-07-24 18:51:58.775129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.201 [2024-07-24 18:51:58.775199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:21.201 [2024-07-24 18:51:58.775244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:21.201 [2024-07-24 18:51:58.775246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.135 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:22.135 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:09:22.135 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:22.135 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:22.135 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:22.135 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:22.135 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:22.135 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:09:22.135 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.135 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 
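The only functional difference from the no_in_capsule pass is the transport setup: -c 4096 sets the in-capsule data size, so writes of up to 4 KiB can be carried inside the NVMe/TCP command capsule itself instead of being fetched in a separate data transfer, while -u 8192 keeps the I/O unit size at 8 KiB. The provisioning RPCs that follow mirror the earlier run; condensed into plain rpc.py calls (rpc_cmd in the harness is a thin wrapper around scripts/rpc.py, so the flags below are verbatim from the log):

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # in-capsule data up to 4096 B
  rpc.py bdev_malloc_create 512 512 -b Malloc1             # 512 MiB ramdisk, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420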
00:09:22.135 [2024-07-24 18:51:59.606223] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:22.135 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.135 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:22.135 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.135 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:22.393 Malloc1 00:09:22.393 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.393 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:22.393 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.393 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:22.393 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.394 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:22.394 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.394 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:22.394 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.394 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:22.394 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.394 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:22.394 [2024-07-24 18:51:59.788782] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:22.394 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.394 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:22.394 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:09:22.394 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:09:22.394 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:09:22.394 18:51:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:09:22.394 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:22.394 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.394 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:22.394 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.394 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:09:22.394 { 00:09:22.394 "name": "Malloc1", 00:09:22.394 "aliases": [ 00:09:22.394 "6e721aae-9790-4a6b-ae34-bf4e673e172f" 00:09:22.394 ], 00:09:22.394 "product_name": "Malloc disk", 00:09:22.394 "block_size": 512, 00:09:22.394 "num_blocks": 1048576, 00:09:22.394 "uuid": "6e721aae-9790-4a6b-ae34-bf4e673e172f", 00:09:22.394 "assigned_rate_limits": { 00:09:22.394 "rw_ios_per_sec": 0, 00:09:22.394 "rw_mbytes_per_sec": 0, 00:09:22.394 "r_mbytes_per_sec": 0, 00:09:22.394 "w_mbytes_per_sec": 0 00:09:22.394 }, 00:09:22.394 "claimed": true, 00:09:22.394 "claim_type": "exclusive_write", 00:09:22.394 "zoned": false, 00:09:22.394 "supported_io_types": { 00:09:22.394 "read": true, 00:09:22.394 "write": true, 00:09:22.394 "unmap": true, 00:09:22.394 "flush": true, 00:09:22.394 "reset": true, 00:09:22.394 "nvme_admin": false, 00:09:22.394 "nvme_io": false, 00:09:22.394 "nvme_io_md": false, 00:09:22.394 "write_zeroes": true, 00:09:22.394 "zcopy": true, 00:09:22.394 "get_zone_info": false, 00:09:22.394 "zone_management": false, 00:09:22.394 "zone_append": false, 00:09:22.394 "compare": false, 00:09:22.394 "compare_and_write": false, 00:09:22.394 "abort": true, 00:09:22.394 "seek_hole": false, 00:09:22.394 "seek_data": false, 00:09:22.394 "copy": true, 00:09:22.394 "nvme_iov_md": false 00:09:22.394 }, 00:09:22.394 "memory_domains": [ 00:09:22.394 { 00:09:22.394 "dma_device_id": "system", 00:09:22.394 "dma_device_type": 1 00:09:22.394 }, 00:09:22.394 { 00:09:22.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.394 "dma_device_type": 2 00:09:22.394 } 00:09:22.394 ], 00:09:22.394 "driver_specific": {} 00:09:22.394 } 00:09:22.394 ]' 00:09:22.394 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:09:22.394 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:09:22.394 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:09:22.394 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:09:22.394 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:09:22.394 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:09:22.394 18:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:22.394 18:51:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:23.326 18:52:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:23.326 18:52:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:09:23.326 18:52:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:23.326 18:52:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:23.326 18:52:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:09:25.220 18:52:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:25.220 18:52:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:25.220 18:52:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:25.220 18:52:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:25.220 18:52:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:25.220 18:52:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:09:25.220 18:52:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:25.220 18:52:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:25.220 18:52:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:25.220 18:52:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:25.220 18:52:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:25.220 18:52:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:25.220 18:52:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:25.220 18:52:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:25.220 18:52:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:25.220 18:52:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:25.220 18:52:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:25.477 18:52:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:26.040 18:52:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:26.971 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:26.971 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:26.971 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:26.971 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:26.971 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:26.971 ************************************ 00:09:26.971 START TEST filesystem_in_capsule_ext4 00:09:26.971 ************************************ 00:09:26.971 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:26.971 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:26.971 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:26.971 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:26.971 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:09:26.971 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:26.971 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:09:26.971 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:09:26.971 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:09:26.971 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:09:26.971 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:26.971 mke2fs 1.46.5 (30-Dec-2021) 00:09:26.971 Discarding device blocks: 0/522240 done 00:09:26.971 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:26.971 Filesystem UUID: de7c66d5-2b7d-44a8-aaab-b8012be7d74d 00:09:26.971 Superblock backups stored on blocks: 00:09:26.971 8193, 24577, 40961, 57345, 73729, 204801, 
221185, 401409 00:09:26.971 00:09:26.971 Allocating group tables: 0/64 done 00:09:26.971 Writing inode tables: 0/64 done 00:09:27.249 Creating journal (8192 blocks): done 00:09:27.249 Writing superblocks and filesystem accounting information: 0/64 done 00:09:27.249 00:09:27.249 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:09:27.249 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:27.249 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:27.508 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:09:27.508 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:27.508 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:09:27.508 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:27.508 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:27.508 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3091771 00:09:27.508 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:27.508 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:27.508 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:27.508 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:27.508 00:09:27.508 real 0m0.470s 00:09:27.508 user 0m0.017s 00:09:27.508 sys 0m0.060s 00:09:27.508 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:27.508 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:27.508 ************************************ 00:09:27.508 END TEST filesystem_in_capsule_ext4 00:09:27.508 ************************************ 00:09:27.508 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:27.508 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:27.508 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:27.508 18:52:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:27.508 ************************************ 00:09:27.508 START TEST filesystem_in_capsule_btrfs 00:09:27.508 ************************************ 00:09:27.508 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:27.508 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:27.508 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:27.508 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:27.508 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:09:27.508 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:27.508 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:09:27.508 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:09:27.508 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:09:27.508 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:09:27.508 18:52:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:27.766 btrfs-progs v6.6.2 00:09:27.766 See https://btrfs.readthedocs.io for more information. 00:09:27.766 00:09:27.766 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:27.766 NOTE: several default settings have changed in version 5.15, please make sure 00:09:27.766 this does not affect your deployments: 00:09:27.766 - DUP for metadata (-m dup) 00:09:27.766 - enabled no-holes (-O no-holes) 00:09:27.766 - enabled free-space-tree (-R free-space-tree) 00:09:27.766 00:09:27.766 Label: (null) 00:09:27.766 UUID: 89c26699-d726-493d-a202-9e534963d9c8 00:09:27.766 Node size: 16384 00:09:27.766 Sector size: 4096 00:09:27.766 Filesystem size: 510.00MiB 00:09:27.766 Block group profiles: 00:09:27.766 Data: single 8.00MiB 00:09:27.766 Metadata: DUP 32.00MiB 00:09:27.766 System: DUP 8.00MiB 00:09:27.766 SSD detected: yes 00:09:27.766 Zoned device: no 00:09:27.766 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:27.766 Runtime features: free-space-tree 00:09:27.766 Checksum: crc32c 00:09:27.766 Number of devices: 1 00:09:27.766 Devices: 00:09:27.766 ID SIZE PATH 00:09:27.766 1 510.00MiB /dev/nvme0n1p1 00:09:27.766 00:09:27.766 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:09:27.766 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:28.330 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:28.330 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:09:28.330 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:28.330 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:09:28.330 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:28.330 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:28.330 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3091771 00:09:28.330 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:28.330 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:28.330 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:28.330 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:28.330 00:09:28.330 real 0m0.797s 00:09:28.330 user 0m0.018s 00:09:28.330 sys 0m0.118s 00:09:28.330 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:28.330 18:52:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:28.330 ************************************ 00:09:28.330 END TEST filesystem_in_capsule_btrfs 00:09:28.330 ************************************ 00:09:28.330 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:28.330 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:28.330 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:28.330 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:28.330 ************************************ 00:09:28.330 START TEST filesystem_in_capsule_xfs 00:09:28.330 ************************************ 00:09:28.330 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:09:28.330 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:28.330 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:28.330 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:28.330 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:09:28.330 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:28.330 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:09:28.330 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:09:28.330 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:09:28.330 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:09:28.330 18:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:28.330 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:28.330 = sectsz=512 attr=2, projid32bit=1 00:09:28.330 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:28.330 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:28.330 data = bsize=4096 blocks=130560, imaxpct=25 00:09:28.330 = sunit=0 swidth=0 blks 00:09:28.330 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:28.330 log =internal log bsize=4096 blocks=16384, version=2 00:09:28.330 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:28.330 realtime =none extsz=4096 blocks=0, 
rtextents=0 00:09:29.260 Discarding blocks...Done. 00:09:29.260 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:09:29.260 18:52:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3091771 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:31.154 00:09:31.154 real 0m2.627s 00:09:31.154 user 0m0.020s 00:09:31.154 sys 0m0.052s 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:31.154 ************************************ 00:09:31.154 END TEST filesystem_in_capsule_xfs 00:09:31.154 ************************************ 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:31.154 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:31.154 18:52:08 
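The same smoke test has now run twice above, once on btrfs and once on xfs: target/filesystem.sh@23-43 mounts the freshly made filesystem, exercises a file create/delete with a sync on either side, unmounts, and then checks that the nvmf target is still alive and the exported device still enumerates. Collected into one sketch, commands and names taken verbatim from the traced lines (the log uses the literal pid where $nvmfpid appears here):

mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"                        # target process survived the I/O
lsblk -l -o NAME | grep -q -w nvme0n1     # exported namespace still present
lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition survived the mount cycle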
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3091771 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 3091771 ']' 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 3091771 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3091771 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3091771' 00:09:31.154 killing process with pid 3091771 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 3091771 00:09:31.154 18:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 3091771 00:09:31.719 18:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:31.719 00:09:31.719 real 0m10.596s 00:09:31.719 user 0m40.567s 
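killprocess (autotest_common.sh@950-974 in the trace above) guards the kill with two checks: the pid must be non-empty and, on Linux, must not resolve to the sudo wrapper; the traced comm here is reactor_0, so the kill proceeds. A sketch reconstructed from those lines; the return codes on the refusal paths are assumptions:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                  # assumed refusal code
    kill -0 "$pid" 2> /dev/null || return 0    # already gone
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1   # never kill the sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}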
00:09:31.719 sys 0m1.740s 00:09:31.719 18:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:31.719 18:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:31.719 ************************************ 00:09:31.719 END TEST nvmf_filesystem_in_capsule 00:09:31.719 ************************************ 00:09:31.719 18:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:09:31.719 18:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:31.719 18:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:09:31.719 18:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:31.719 18:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:09:31.719 18:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:31.719 18:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:31.719 rmmod nvme_tcp 00:09:31.719 rmmod nvme_fabrics 00:09:31.719 rmmod nvme_keyring 00:09:31.719 18:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:31.719 18:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:09:31.719 18:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:09:31.719 18:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:31.719 18:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:31.719 18:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:31.719 18:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:31.719 18:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:31.719 18:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:31.719 18:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.719 18:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:31.719 18:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.247 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:34.247 00:09:34.247 real 0m27.706s 00:09:34.247 user 1m29.529s 00:09:34.247 sys 0m5.224s 00:09:34.247 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:34.247 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:34.247 ************************************ 00:09:34.247 END TEST nvmf_filesystem 00:09:34.247 ************************************ 00:09:34.247 18:52:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:34.247 18:52:11 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:34.247 18:52:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:34.247 18:52:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:34.247 ************************************ 00:09:34.247 START TEST nvmf_target_discovery 00:09:34.247 ************************************ 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:34.248 * Looking for test storage... 00:09:34.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.248 18:52:11 
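nvmf/common.sh@16-19 in the trace fixes the initiator identity up front: nvme gen-hostnqn produces a uuid-based NQN, the bare uuid becomes the host id, and both are packed into an array reused by every later nvme connect/discover call. A sketch; the suffix-strip is one plausible way to derive the value the trace shows, not necessarily the script's own code:

NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # assumed derivation of the bare uuid
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
# later in this test: nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420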
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:09:34.248 18:52:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:36.148 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:36.148 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:09:36.148 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:36.148 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:36.148 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:36.148 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:36.148 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:36.148 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:09:36.148 18:52:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:36.148 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:09:36.148 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:09:36.148 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:09:36.148 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:09:36.148 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:09:36.148 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:09:36.148 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:36.148 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:36.148 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:36.148 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:36.148 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:36.148 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:36.148 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:36.148 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:36.148 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:36.148 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:36.148 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:36.149 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:36.149 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:36.149 Found net devices under 0000:09:00.0: cvl_0_0 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:36.149 18:52:13 
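nvmf/common.sh@296-330 builds per-family NIC tables and, because this job pins SPDK_TEST_NVMF_NICS=e810, ends up probing only the two E810 functions matched above at 0000:09:00.0/1. The device IDs below are copied from the traced lookups; note that in the script each array actually holds PCI bus addresses resolved through a pci_bus_cache map keyed by vendor:device, which this condensed sketch glosses over:

intel=0x8086 mellanox=0x15b3
e810=(0x1592 0x159b)    # Intel E810 (the 0x159b parts found in this run)
x722=(0x37d2)           # Intel X722
mlx=(0xa2dc 0x1021 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013)   # Mellanox ConnectX family
pci_devs=("${e810[@]}") # NICS=e810, so only the E810 list is scanned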
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:36.149 Found net devices under 0000:09:00.1: cvl_0_1 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:36.149 18:52:13 
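nvmf_tcp_init (nvmf/common.sh@229-261, traced above) then splits the two ports into a point-to-point lab: cvl_0_0 moves into a private namespace as the 10.0.0.2 target while cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator. The commands are verbatim from the traced lines:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up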
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:36.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:36.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:09:36.149 00:09:36.149 --- 10.0.0.2 ping statistics --- 00:09:36.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.149 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:36.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:36.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:09:36.149 00:09:36.149 --- 10.0.0.1 ping statistics --- 00:09:36.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.149 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3095111 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3095111 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 3095111 ']' 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:36.149 18:52:13 
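With both directions ping-clean, nvmfappstart launches the target inside the namespace and blocks until its RPC socket answers. The launch line is copied from the trace; the poll loop is an assumed stand-in for waitforlisten, whose body this log does not show:

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5   # assumed interval; waitforlisten's real loop is not traced here
done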
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:36.149 18:52:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:36.149 [2024-07-24 18:52:13.482775] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:09:36.149 [2024-07-24 18:52:13.482863] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.149 EAL: No free 2048 kB hugepages reported on node 1 00:09:36.149 [2024-07-24 18:52:13.550121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:36.150 [2024-07-24 18:52:13.662073] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:36.150 [2024-07-24 18:52:13.662165] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:36.150 [2024-07-24 18:52:13.662181] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:36.150 [2024-07-24 18:52:13.662193] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:36.150 [2024-07-24 18:52:13.662204] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:36.150 [2024-07-24 18:52:13.662270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.150 [2024-07-24 18:52:13.662334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:36.150 [2024-07-24 18:52:13.662398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:36.150 [2024-07-24 18:52:13.662401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.083 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:37.083 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:09:37.083 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:37.083 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:37.083 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.083 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.084 [2024-07-24 18:52:14.458773] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.084 Null1 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.084 [2024-07-24 18:52:14.499054] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.084 Null2 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.084 18:52:14 
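discovery.sh@23-35 drives the whole fixture over RPC: one TCP transport, four null bdevs each wrapped in its own subsystem with a listener, plus a listener on the discovery subsystem and a referral to port 4430. Collected sketch, commands verbatim from the trace (rpc_cmd is the test helper that forwards to scripts/rpc.py on /var/tmp/spdk.sock; the size arguments come from NULL_BDEV_SIZE/NULL_BLOCK_SIZE set earlier):

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 1 4); do
    rpc_cmd bdev_null_create "Null$i" 102400 512
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

This is why the nvme discover output below reports exactly six records: the current discovery subsystem itself, the four cnode subsystems, and the port-4430 referral.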
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.084 Null3 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.084 Null4 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.084 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:09:37.342 00:09:37.342 Discovery Log Number of Records 6, Generation counter 6 00:09:37.342 =====Discovery Log Entry 0====== 00:09:37.342 trtype: tcp 00:09:37.342 adrfam: ipv4 00:09:37.342 subtype: current discovery subsystem 00:09:37.342 treq: not required 00:09:37.342 portid: 0 00:09:37.342 trsvcid: 4420 00:09:37.342 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:37.342 traddr: 10.0.0.2 00:09:37.342 eflags: explicit discovery connections, duplicate discovery information 00:09:37.342 sectype: none 00:09:37.342 =====Discovery Log Entry 1====== 00:09:37.342 trtype: tcp 00:09:37.342 adrfam: ipv4 00:09:37.342 subtype: nvme subsystem 00:09:37.342 treq: not required 00:09:37.342 portid: 0 00:09:37.342 trsvcid: 4420 00:09:37.342 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:37.342 traddr: 10.0.0.2 00:09:37.342 eflags: none 00:09:37.342 sectype: none 00:09:37.342 =====Discovery Log Entry 2====== 00:09:37.342 trtype: tcp 00:09:37.342 adrfam: ipv4 00:09:37.342 subtype: nvme subsystem 00:09:37.342 treq: not required 00:09:37.342 portid: 0 00:09:37.342 trsvcid: 4420 00:09:37.342 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:37.342 traddr: 10.0.0.2 00:09:37.342 eflags: none 00:09:37.342 sectype: none 00:09:37.342 =====Discovery Log Entry 3====== 00:09:37.342 trtype: tcp 00:09:37.342 adrfam: ipv4 00:09:37.342 subtype: nvme subsystem 00:09:37.342 treq: not required 00:09:37.342 portid: 0 00:09:37.342 trsvcid: 4420 00:09:37.342 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:37.342 traddr: 10.0.0.2 00:09:37.342 eflags: none 00:09:37.342 sectype: none 00:09:37.342 =====Discovery Log Entry 4====== 00:09:37.342 trtype: tcp 00:09:37.342 adrfam: ipv4 00:09:37.342 subtype: nvme subsystem 00:09:37.342 treq: not required 00:09:37.342 portid: 0 00:09:37.342 trsvcid: 4420 00:09:37.342 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:37.342 traddr: 10.0.0.2 00:09:37.342 eflags: none 00:09:37.342 sectype: none 00:09:37.342 =====Discovery Log Entry 5====== 00:09:37.342 trtype: tcp 00:09:37.342 adrfam: ipv4 00:09:37.342 subtype: discovery subsystem referral 00:09:37.342 treq: not required 00:09:37.342 portid: 0 00:09:37.342 trsvcid: 4430 00:09:37.342 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:37.342 traddr: 10.0.0.2 00:09:37.342 eflags: none 00:09:37.342 sectype: none 00:09:37.342 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:37.342 Perform nvmf subsystem discovery via RPC 00:09:37.342 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:37.342 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.342 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.342 [ 00:09:37.342 { 00:09:37.342 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:37.342 "subtype": "Discovery", 00:09:37.342 "listen_addresses": [ 00:09:37.342 { 00:09:37.342 "trtype": "TCP", 00:09:37.342 "adrfam": "IPv4", 00:09:37.342 "traddr": "10.0.0.2", 00:09:37.342 "trsvcid": "4420" 00:09:37.342 } 00:09:37.342 ], 00:09:37.342 "allow_any_host": true, 00:09:37.342 "hosts": [] 00:09:37.342 }, 00:09:37.342 { 00:09:37.342 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:37.342 "subtype": "NVMe", 00:09:37.342 "listen_addresses": [ 00:09:37.342 { 00:09:37.342 "trtype": "TCP", 00:09:37.342 "adrfam": "IPv4", 00:09:37.342 
"traddr": "10.0.0.2", 00:09:37.342 "trsvcid": "4420" 00:09:37.342 } 00:09:37.342 ], 00:09:37.342 "allow_any_host": true, 00:09:37.342 "hosts": [], 00:09:37.342 "serial_number": "SPDK00000000000001", 00:09:37.342 "model_number": "SPDK bdev Controller", 00:09:37.342 "max_namespaces": 32, 00:09:37.342 "min_cntlid": 1, 00:09:37.342 "max_cntlid": 65519, 00:09:37.342 "namespaces": [ 00:09:37.342 { 00:09:37.342 "nsid": 1, 00:09:37.343 "bdev_name": "Null1", 00:09:37.343 "name": "Null1", 00:09:37.343 "nguid": "02A91DAEE8A34C099FDE6393AF465423", 00:09:37.343 "uuid": "02a91dae-e8a3-4c09-9fde-6393af465423" 00:09:37.343 } 00:09:37.343 ] 00:09:37.343 }, 00:09:37.343 { 00:09:37.343 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:37.343 "subtype": "NVMe", 00:09:37.343 "listen_addresses": [ 00:09:37.343 { 00:09:37.343 "trtype": "TCP", 00:09:37.343 "adrfam": "IPv4", 00:09:37.343 "traddr": "10.0.0.2", 00:09:37.343 "trsvcid": "4420" 00:09:37.343 } 00:09:37.343 ], 00:09:37.343 "allow_any_host": true, 00:09:37.343 "hosts": [], 00:09:37.343 "serial_number": "SPDK00000000000002", 00:09:37.343 "model_number": "SPDK bdev Controller", 00:09:37.343 "max_namespaces": 32, 00:09:37.343 "min_cntlid": 1, 00:09:37.343 "max_cntlid": 65519, 00:09:37.343 "namespaces": [ 00:09:37.343 { 00:09:37.343 "nsid": 1, 00:09:37.343 "bdev_name": "Null2", 00:09:37.343 "name": "Null2", 00:09:37.343 "nguid": "9CD9A8CD026B42BCAD22C6519A722ECB", 00:09:37.343 "uuid": "9cd9a8cd-026b-42bc-ad22-c6519a722ecb" 00:09:37.343 } 00:09:37.343 ] 00:09:37.343 }, 00:09:37.343 { 00:09:37.343 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:37.343 "subtype": "NVMe", 00:09:37.343 "listen_addresses": [ 00:09:37.343 { 00:09:37.343 "trtype": "TCP", 00:09:37.343 "adrfam": "IPv4", 00:09:37.343 "traddr": "10.0.0.2", 00:09:37.343 "trsvcid": "4420" 00:09:37.343 } 00:09:37.343 ], 00:09:37.343 "allow_any_host": true, 00:09:37.343 "hosts": [], 00:09:37.343 "serial_number": "SPDK00000000000003", 00:09:37.343 "model_number": "SPDK bdev Controller", 00:09:37.343 "max_namespaces": 32, 00:09:37.343 "min_cntlid": 1, 00:09:37.343 "max_cntlid": 65519, 00:09:37.343 "namespaces": [ 00:09:37.343 { 00:09:37.343 "nsid": 1, 00:09:37.343 "bdev_name": "Null3", 00:09:37.343 "name": "Null3", 00:09:37.343 "nguid": "DA0A23DF4A8A44078433F4F2F3ACB2B1", 00:09:37.343 "uuid": "da0a23df-4a8a-4407-8433-f4f2f3acb2b1" 00:09:37.343 } 00:09:37.343 ] 00:09:37.343 }, 00:09:37.343 { 00:09:37.343 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:37.343 "subtype": "NVMe", 00:09:37.343 "listen_addresses": [ 00:09:37.343 { 00:09:37.343 "trtype": "TCP", 00:09:37.343 "adrfam": "IPv4", 00:09:37.343 "traddr": "10.0.0.2", 00:09:37.343 "trsvcid": "4420" 00:09:37.343 } 00:09:37.343 ], 00:09:37.343 "allow_any_host": true, 00:09:37.343 "hosts": [], 00:09:37.343 "serial_number": "SPDK00000000000004", 00:09:37.343 "model_number": "SPDK bdev Controller", 00:09:37.343 "max_namespaces": 32, 00:09:37.343 "min_cntlid": 1, 00:09:37.343 "max_cntlid": 65519, 00:09:37.343 "namespaces": [ 00:09:37.343 { 00:09:37.343 "nsid": 1, 00:09:37.343 "bdev_name": "Null4", 00:09:37.343 "name": "Null4", 00:09:37.343 "nguid": "7C01C9005F1D42BEA7E8F86C773E0247", 00:09:37.343 "uuid": "7c01c900-5f1d-42be-a7e8-f86c773e0247" 00:09:37.343 } 00:09:37.343 ] 00:09:37.343 } 00:09:37.343 ] 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:09:37.343 18:52:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:37.343 18:52:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:37.343 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:37.343 rmmod nvme_tcp 00:09:37.343 rmmod nvme_fabrics 00:09:37.343 rmmod nvme_keyring 00:09:37.601 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:37.601 18:52:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:09:37.601 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:09:37.601 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3095111 ']' 00:09:37.601 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3095111 00:09:37.601 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 3095111 ']' 00:09:37.601 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 3095111 00:09:37.601 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:09:37.601 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:37.601 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3095111 00:09:37.601 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:37.601 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:37.601 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3095111' 00:09:37.601 killing process with pid 3095111 00:09:37.601 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 3095111 00:09:37.601 18:52:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 3095111 00:09:37.861 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:37.861 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:37.861 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:37.861 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:37.861 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:37.861 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.861 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.861 18:52:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.793 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:39.793 00:09:39.793 real 0m6.014s 00:09:39.793 user 0m7.176s 00:09:39.793 sys 0m1.846s 00:09:39.793 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:39.793 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:39.793 ************************************ 00:09:39.793 END TEST nvmf_target_discovery 00:09:39.793 ************************************ 00:09:39.793 18:52:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:39.793 18:52:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:39.793 18:52:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:39.793 18:52:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:39.793 ************************************ 00:09:39.793 START TEST nvmf_referrals 00:09:39.793 ************************************ 00:09:39.793 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:40.052 * Looking for test storage... 00:09:40.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.052 18:52:17 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.052 18:52:17 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:09:40.052 18:52:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.975 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:41.975 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:09:41.975 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:41.975 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:41.975 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:41.975 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:41.975 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:41.975 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # 
net_devs=() 00:09:41.975 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:41.975 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:09:41.975 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:09:41.975 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:09:41.975 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:09:41.975 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:09:41.975 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:09:41.975 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:41.975 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:41.975 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:41.975 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:41.975 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:41.975 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:41.975 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:41.975 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:41.976 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.976 18:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:41.976 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:41.976 Found net devices under 0000:09:00.0: cvl_0_0 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 
00:09:41.976 Found net devices under 0000:09:00.1: cvl_0_1 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:41.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:41.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:09:41.976 00:09:41.976 --- 10.0.0.2 ping statistics --- 00:09:41.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.976 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:41.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:41.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:09:41.976 00:09:41.976 --- 10.0.0.1 ping statistics --- 00:09:41.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.976 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:41.976 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:42.234 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:42.234 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:42.234 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:42.234 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:42.234 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3097317 00:09:42.234 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:42.234 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3097317 00:09:42.234 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 3097317 ']' 00:09:42.234 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.234 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:42.234 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
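For reference, the nvmf_tcp_init step above is what turns this host's dual-port E810 NIC into a self-contained target/initiator pair: one port is moved into a private network namespace and becomes the target side (10.0.0.2), the other stays in the root namespace as the initiator (10.0.0.1). Condensed from the commands echoed in this run (interface and namespace names are specific to this test bed), the sequence is roughly:

# move the target-side port into its own network namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# address the initiator side (root namespace) and the target side (namespace)
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# bring both ends up, open the NVMe/TCP port, and verify reachability in each direction
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With that in place, nvmfappstart launches the target inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF), and the harness then waits on its pid (3097317 in this run) and RPC socket, which is the "Waiting for process..." message above.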
00:09:42.234 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:42.234 18:52:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:42.234 [2024-07-24 18:52:19.635742] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:09:42.234 [2024-07-24 18:52:19.635828] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.234 EAL: No free 2048 kB hugepages reported on node 1 00:09:42.234 [2024-07-24 18:52:19.705478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:42.234 [2024-07-24 18:52:19.828526] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:42.234 [2024-07-24 18:52:19.828584] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:42.234 [2024-07-24 18:52:19.828601] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:42.234 [2024-07-24 18:52:19.828615] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:42.234 [2024-07-24 18:52:19.828627] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:42.234 [2024-07-24 18:52:19.828685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.234 [2024-07-24 18:52:19.828710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:42.234 [2024-07-24 18:52:19.828760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:42.234 [2024-07-24 18:52:19.828763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:43.168 [2024-07-24 18:52:20.642876] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.168 18:52:20 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:43.168 [2024-07-24 18:52:20.655051] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 
127.0.0.3 127.0.0.4 00:09:43.168 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:43.427 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:43.427 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:43.427 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:43.427 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:43.427 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:43.427 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:43.427 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:43.427 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:43.427 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:43.427 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.427 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:43.427 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.427 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:43.427 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.427 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:43.427 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.427 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:43.427 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.427 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:43.427 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.427 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:43.427 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:43.427 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.427 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:43.427 18:52:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.427 18:52:20 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:43.427 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:43.427 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:43.427 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:43.427 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:43.427 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:43.427 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:43.685 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:43.685 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:43.685 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:09:43.685 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.685 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:43.685 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.685 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:43.685 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.685 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:43.685 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.685 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:43.685 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:43.685 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:43.685 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.685 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:43.685 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:43.685 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:43.685 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.685 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:43.685 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:43.685 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 
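Stripped of the xtrace noise, the referral round-trip this test keeps repeating is driven by four commands. The sketch below is not the verbatim script: rpc.py stands in for the harness's rpc_cmd wrapper (which effectively invokes SPDK's scripts/rpc.py against /var/tmp/spdk.sock), and the discovery listener is the one created above on 10.0.0.2:8009.

# register a referral pointing at another discovery service
rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430

# the referral must show up via RPC ...
rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'

# ... and in the discovery log that an initiator actually retrieves
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'

# removing it must empty the list again
rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430

The -n flag exercised above additionally pins a referral to a specific subsystem NQN (discovery vs. nqn.2016-06.io.spdk:cnode1), which is why the same 127.0.0.2 address can legitimately appear twice in the referral list before the targeted removals run.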
00:09:43.685 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:43.685 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:43.685 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:43.685 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:43.685 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:43.685 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:43.685 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:43.685 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:43.685 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:43.685 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:43.685 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:43.685 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:43.942 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:43.942 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:43.942 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:43.942 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:43.942 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:43.942 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:44.200 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:44.200 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:44.200 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.200 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:44.200 18:52:21 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.200 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:44.200 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:44.200 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:44.200 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.200 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:44.200 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:44.200 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:44.200 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.200 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:44.200 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:44.200 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:44.200 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:44.200 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:44.200 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:44.200 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:44.200 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:44.200 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:44.200 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:44.200 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:44.200 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:44.200 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:44.200 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:44.200 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:44.458 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:44.458 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:44.458 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:09:44.458 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:44.458 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:44.458 18:52:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:44.458 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:44.458 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:44.458 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.458 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:44.458 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.458 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:44.458 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.458 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:44.458 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:44.715 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.715 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:44.715 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:44.715 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:44.715 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:44.715 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:44.715 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:44.715 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:44.715 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:44.715 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:44.715 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:44.715 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:44.715 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:44.715 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 
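nvmftestfini, which runs next, unwinds everything nvmftestinit and nvmfappstart set up. In outline (pid and names as they appear in this run; remove_spdk_ns is assumed here to reduce to an ip netns delete, which the log does not show verbatim):

sync                              # flush buffers before tearing the stack down
modprobe -v -r nvme-tcp           # unload initiator-side transport (rmmod nvme_tcp/nvme_fabrics/nvme_keyring)
modprobe -v -r nvme-fabrics
kill 3097317 && wait 3097317      # stop the nvmf_tgt reactor process
ip netns delete cvl_0_0_ns_spdk   # drop the target-side namespace (assumed form of remove_spdk_ns)
ip -4 addr flush cvl_0_1          # clear the initiator-side address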
00:09:44.716 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:44.716 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:09:44.716 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:44.716 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:44.716 rmmod nvme_tcp 00:09:44.716 rmmod nvme_fabrics 00:09:44.716 rmmod nvme_keyring 00:09:44.716 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:44.716 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:09:44.716 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:09:44.716 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3097317 ']' 00:09:44.716 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3097317 00:09:44.716 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 3097317 ']' 00:09:44.716 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 3097317 00:09:44.716 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:09:44.716 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:44.716 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3097317 00:09:44.716 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:44.716 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:44.716 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3097317' 00:09:44.716 killing process with pid 3097317 00:09:44.716 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 3097317 00:09:44.716 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 3097317 00:09:45.284 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:45.284 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:45.284 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:45.284 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:45.284 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:45.284 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.284 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.284 18:52:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:47.188 00:09:47.188 real 0m7.264s 00:09:47.188 user 0m12.413s 00:09:47.188 sys 0m2.138s 00:09:47.188 18:52:24 
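The teardown interleaved above (nvmftestfini/nvmfcleanup) tolerates transient failures from the kernel while controllers drain before killing the target by pid. A rough equivalent; the exact retry body lives in this tree's nvmf/common.sh:

set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break    # may fail while connections drain
    sleep 1                             # assumed backoff between attempts
done
modprobe -v -r nvme-fabrics
set -e
kill "$nvmfpid"                         # $nvmfpid: target pid saved at startup
wait "$nvmfpid"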
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:47.188 ************************************ 00:09:47.188 END TEST nvmf_referrals 00:09:47.188 ************************************ 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:47.188 ************************************ 00:09:47.188 START TEST nvmf_connect_disconnect 00:09:47.188 ************************************ 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:47.188 * Looking for test storage... 00:09:47.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.188 18:52:24 
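The hostnqn/hostid pair echoed above is generated per run by nvme-cli; common.sh derives it roughly like this (the parameter expansion is an assumption that matches the values printed in this log):

NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}     # bare UUID, reused as --hostid
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")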
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:09:47.188 18:52:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:49.719 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:49.719 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:49.719 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:49.720 18:52:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:49.720 Found net devices under 0000:09:00.0: cvl_0_0 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:49.720 Found net devices under 0000:09:00.1: cvl_0_1 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
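The device loop above boils down to a sysfs walk: for each supported NIC PCI function it collects the netdev(s) exposed underneath it. Stripped of the driver and link-state checks:

for pci in 0000:09:00.0 0000:09:00.1; do      # the two e810 ports found above
    for path in /sys/bus/pci/devices/$pci/net/*; do
        [[ -e $path ]] || continue            # skip if no netdev is bound
        echo "Found net devices under $pci: ${path##*/}"
    done
done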
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:49.720 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:49.720 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:09:49.720 00:09:49.720 --- 10.0.0.2 ping statistics --- 00:09:49.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.720 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:49.720 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
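nvmf_tcp_init, whose commands are interleaved above, moves one physical port into a private namespace so the target (10.0.0.2) and initiator (10.0.0.1) talk over a real link; the ping pair at the end is the sanity check. In order, as run here:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the ns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns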
00:09:49.720 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:09:49.720 00:09:49.720 --- 10.0.0.1 ping statistics --- 00:09:49.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.720 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3099618 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3099618 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 3099618 ']' 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:49.720 18:52:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:49.720 [2024-07-24 18:52:26.951783] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
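nvmfappstart then launches the target inside that namespace and blocks on its RPC socket; roughly, using this tree's waitforlisten helper:

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
waitforlisten "$nvmfpid"    # polls /var/tmp/spdk.sock until the app answers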
00:09:49.720 [2024-07-24 18:52:26.951864] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.720 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.720 [2024-07-24 18:52:27.017399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:49.720 [2024-07-24 18:52:27.130238] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:49.720 [2024-07-24 18:52:27.130290] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:49.720 [2024-07-24 18:52:27.130319] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.720 [2024-07-24 18:52:27.130330] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:49.720 [2024-07-24 18:52:27.130339] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:49.720 [2024-07-24 18:52:27.130408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.720 [2024-07-24 18:52:27.130488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:49.720 [2024-07-24 18:52:27.130540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.720 [2024-07-24 18:52:27.130537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:49.720 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:49.720 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:09:49.720 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:49.720 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:49.720 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:49.720 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:49.720 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:49.720 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.720 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:49.720 [2024-07-24 18:52:27.294697] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:49.720 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.720 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:49.720 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.721 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:49.978 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.978 18:52:27 
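The rpc_cmd calls above and just below provision the test subsystem end to end. The same sequence via rpc.py, with the flag meanings as I read them from rpc.py's help (hedged, not authoritative):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0   # -u io-unit size, -c in-capsule
                                                    # data size, -o TCP C2H-success toggle
$rpc bdev_malloc_create 64 512                      # 64 MiB / 512 B blocks -> Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDKISFASTANDAWESOME                      # -a: allow any host to connect
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420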
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:49.978 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:49.978 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.978 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:49.978 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.978 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:49.978 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.978 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:49.978 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.978 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:49.978 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.978 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:49.978 [2024-07-24 18:52:27.352267] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:49.978 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.978 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:09:49.978 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:09:49.978 18:52:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:53.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.777 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.153 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:04.153 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:04.153 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:04.153 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:10:04.153 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:04.153 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:10:04.153 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:04.153 18:52:41 
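The five "disconnected 1 controller(s)" lines above are one per loop iteration; the body is approximately the following (waitforserial is a helper from these scripts that blocks until the namespace's block device appears):

for i in $(seq 1 5); do                  # num_iterations=5 above
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    waitforserial SPDKISFASTANDAWESOME   # wait for the disk to show up
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # emits the NQN:... line
done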
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:04.153 rmmod nvme_tcp 00:10:04.153 rmmod nvme_fabrics 00:10:04.153 rmmod nvme_keyring 00:10:04.153 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:04.153 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:10:04.153 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:10:04.153 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3099618 ']' 00:10:04.153 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3099618 00:10:04.153 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3099618 ']' 00:10:04.153 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 3099618 00:10:04.153 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:10:04.153 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:04.153 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3099618 00:10:04.153 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:04.154 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:04.154 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3099618' 00:10:04.154 killing process with pid 3099618 00:10:04.154 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 3099618 00:10:04.154 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 3099618 00:10:04.154 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:04.154 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:04.154 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:04.154 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:04.154 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:04.154 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.154 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.154 18:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:06.707 00:10:06.707 real 0m19.004s 00:10:06.707 user 0m57.290s 00:10:06.707 sys 0m3.319s 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:06.707 18:52:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:06.707 ************************************ 00:10:06.707 END TEST nvmf_connect_disconnect 00:10:06.707 ************************************ 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:06.707 ************************************ 00:10:06.707 START TEST nvmf_multitarget 00:10:06.707 ************************************ 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:06.707 * Looking for test storage... 00:10:06.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.707 18:52:43 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:10:06.707 18:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:08.607 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:08.607 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:10:08.607 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:08.607 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:08.607 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:08.607 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:08.607 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:08.607 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:10:08.607 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
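multitarget_rpc.py, set as rpc_py above, talks to the same /var/tmp/spdk.sock but drives the multi-target RPCs; the test that follows this setup creates, lists, and deletes named targets, roughly as sketched here (the target names are illustrative):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

$rpc_py nvmf_create_target -n nvmf_tgt_1    # illustrative names
$rpc_py nvmf_create_target -n nvmf_tgt_2
$rpc_py nvmf_get_targets                    # expect both names back
$rpc_py nvmf_delete_target -n nvmf_tgt_1
$rpc_py nvmf_delete_target -n nvmf_tgt_2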
nvmf/common.sh@295 -- # local -ga net_devs 00:10:08.607 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:10:08.607 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:10:08.607 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:10:08.607 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:10:08.607 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:10:08.607 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:10:08.607 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:08.608 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.608 18:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:08.608 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:08.608 Found net devices under 0000:09:00.0: cvl_0_0 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:08.608 Found net devices under 0000:09:00.1: cvl_0_1 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:08.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:08.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:10:08.608 00:10:08.608 --- 10.0.0.2 ping statistics --- 00:10:08.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.608 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:08.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:08.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:10:08.608 00:10:08.608 --- 10.0.0.1 ping statistics --- 00:10:08.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.608 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:08.608 18:52:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:08.608 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:08.608 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:08.608 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:08.608 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:08.608 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3103375 00:10:08.608 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:08.608 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3103375 00:10:08.608 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 3103375 ']' 00:10:08.608 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.609 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:08.609 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
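Annotation: the first half of the trace above is nvmf_tcp_init from nvmf/common.sh wiring the TCP test path before nvmfappstart takes over: the e810 port that will host the target (cvl_0_0) is moved into a private network namespace, both ends of the link get 10.0.0.x/24 addresses, an iptables rule opens TCP port 4420 on the initiator side, and a ping in each direction proves the path before NVMF_APP is prefixed with "ip netns exec". A minimal standalone sketch of the same sequence, using the interface names and addresses from the trace (run as root; error handling omitted):

    # namespace wiring as performed above; assumes two connected ports,
    # cvl_0_0 (target side) and cvl_0_1 (initiator side)
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"             # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator IP
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                          # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator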
00:10:08.609 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:08.609 18:52:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:08.609 [2024-07-24 18:52:46.058544] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:10:08.609 [2024-07-24 18:52:46.058636] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.609 EAL: No free 2048 kB hugepages reported on node 1 00:10:08.609 [2024-07-24 18:52:46.127065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:08.866 [2024-07-24 18:52:46.249707] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:08.866 [2024-07-24 18:52:46.249766] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:08.866 [2024-07-24 18:52:46.249783] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:08.866 [2024-07-24 18:52:46.249797] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:08.866 [2024-07-24 18:52:46.249808] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:08.866 [2024-07-24 18:52:46.249877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.866 [2024-07-24 18:52:46.249931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:08.866 [2024-07-24 18:52:46.249984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:08.866 [2024-07-24 18:52:46.249987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.432 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:09.432 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:10:09.689 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:09.689 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:09.689 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:09.689 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:09.689 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:09.689 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:09.689 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:10:09.689 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:09.689 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:09.689 "nvmf_tgt_1" 00:10:09.689 18:52:47 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:09.946 "nvmf_tgt_2" 00:10:09.946 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:09.946 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:10:09.946 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:09.946 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:10.203 true 00:10:10.203 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:10.203 true 00:10:10.203 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:10.203 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:10:10.461 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:10.461 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:10.461 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:10:10.461 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:10.461 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:10:10.461 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:10.461 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:10:10.461 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:10.461 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:10.461 rmmod nvme_tcp 00:10:10.461 rmmod nvme_fabrics 00:10:10.461 rmmod nvme_keyring 00:10:10.461 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:10.461 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:10:10.461 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:10:10.461 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3103375 ']' 00:10:10.461 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3103375 00:10:10.461 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 3103375 ']' 00:10:10.461 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 3103375 00:10:10.461 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:10:10.461 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
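Annotation: that is the entire multitarget check — nvmf_get_targets must report exactly one target (the default) at start, three after the two nvmf_create_target calls, and one again after both deletes; the surrounding rmmod/killprocess records are the shared nvmftestfini teardown, which continues below. Distilled into a sketch (RPC path shortened relative to the spdk tree; -s 32 sizes each new target's subsystem array, as in the trace):

    RPC=test/nvmf/target/multitarget_rpc.py
    [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]   # only the default target
    $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
    $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]
    $RPC nvmf_delete_target -n nvmf_tgt_1
    $RPC nvmf_delete_target -n nvmf_tgt_2
    [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]   # back to the default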
00:10:10.461 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3103375 00:10:10.461 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:10.461 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:10.461 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3103375' 00:10:10.461 killing process with pid 3103375 00:10:10.461 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 3103375 00:10:10.461 18:52:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 3103375 00:10:10.718 18:52:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:10.718 18:52:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:10.718 18:52:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:10.718 18:52:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:10.718 18:52:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:10.718 18:52:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.718 18:52:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.718 18:52:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.254 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:13.254 00:10:13.254 real 0m6.513s 00:10:13.254 user 0m9.323s 00:10:13.254 sys 0m1.982s 00:10:13.254 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:13.254 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:13.254 ************************************ 00:10:13.254 END TEST nvmf_multitarget 00:10:13.254 ************************************ 00:10:13.254 18:52:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:13.254 18:52:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:13.254 18:52:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:13.254 18:52:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:13.254 ************************************ 00:10:13.254 START TEST nvmf_rpc 00:10:13.254 ************************************ 00:10:13.254 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:13.254 * Looking for test storage... 
00:10:13.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:13.254 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:13.254 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:10:13.254 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:13.254 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.254 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:13.254 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.254 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.254 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.254 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.254 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.254 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.254 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:13.255 18:52:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:10:13.255 18:52:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:15.157 18:52:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:15.157 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:15.157 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.157 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:15.158 
18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:15.158 Found net devices under 0000:09:00.0: cvl_0_0 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:15.158 Found net devices under 0000:09:00.1: cvl_0_1 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:15.158 18:52:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:15.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:15.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:10:15.158 00:10:15.158 --- 10.0.0.2 ping statistics --- 00:10:15.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.158 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:15.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:15.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:10:15.158 00:10:15.158 --- 10.0.0.1 ping statistics --- 00:10:15.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.158 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3105599 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:15.158 18:52:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3105599 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 3105599 ']' 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:15.158 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.158 [2024-07-24 18:52:52.583376] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:10:15.158 [2024-07-24 18:52:52.583449] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.158 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.158 [2024-07-24 18:52:52.653499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:15.416 [2024-07-24 18:52:52.776815] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:15.416 [2024-07-24 18:52:52.776867] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:15.416 [2024-07-24 18:52:52.776883] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:15.416 [2024-07-24 18:52:52.776895] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:15.416 [2024-07-24 18:52:52.776907] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
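Annotation: nvmfappstart has launched the target inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and waitforlisten now blocks until the RPC socket answers, giving up after max_retries=100 or as soon as the pid disappears; the EAL and reactor notices below are the target finishing startup. A loose approximation of the wait loop (the real helper in autotest_common.sh probes through rpc.py; checking for the socket file is a simplification):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    rpc_addr=/var/tmp/spdk.sock
    for i in $(seq 1 100); do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # target died during startup
        [ -S "$rpc_addr" ] && break                # RPC socket is up
        sleep 0.1
    done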
00:10:15.416 [2024-07-24 18:52:52.776964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.416 [2024-07-24 18:52:52.776994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.416 [2024-07-24 18:52:52.777046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:15.416 [2024-07-24 18:52:52.777049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.416 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:15.416 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:15.416 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:15.416 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:15.416 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.416 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:15.416 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:15.416 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.416 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.416 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.416 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:15.416 "tick_rate": 2700000000, 00:10:15.416 "poll_groups": [ 00:10:15.416 { 00:10:15.416 "name": "nvmf_tgt_poll_group_000", 00:10:15.416 "admin_qpairs": 0, 00:10:15.416 "io_qpairs": 0, 00:10:15.416 "current_admin_qpairs": 0, 00:10:15.416 "current_io_qpairs": 0, 00:10:15.416 "pending_bdev_io": 0, 00:10:15.416 "completed_nvme_io": 0, 00:10:15.416 "transports": [] 00:10:15.416 }, 00:10:15.416 { 00:10:15.416 "name": "nvmf_tgt_poll_group_001", 00:10:15.416 "admin_qpairs": 0, 00:10:15.416 "io_qpairs": 0, 00:10:15.416 "current_admin_qpairs": 0, 00:10:15.416 "current_io_qpairs": 0, 00:10:15.416 "pending_bdev_io": 0, 00:10:15.416 "completed_nvme_io": 0, 00:10:15.416 "transports": [] 00:10:15.416 }, 00:10:15.416 { 00:10:15.417 "name": "nvmf_tgt_poll_group_002", 00:10:15.417 "admin_qpairs": 0, 00:10:15.417 "io_qpairs": 0, 00:10:15.417 "current_admin_qpairs": 0, 00:10:15.417 "current_io_qpairs": 0, 00:10:15.417 "pending_bdev_io": 0, 00:10:15.417 "completed_nvme_io": 0, 00:10:15.417 "transports": [] 00:10:15.417 }, 00:10:15.417 { 00:10:15.417 "name": "nvmf_tgt_poll_group_003", 00:10:15.417 "admin_qpairs": 0, 00:10:15.417 "io_qpairs": 0, 00:10:15.417 "current_admin_qpairs": 0, 00:10:15.417 "current_io_qpairs": 0, 00:10:15.417 "pending_bdev_io": 0, 00:10:15.417 "completed_nvme_io": 0, 00:10:15.417 "transports": [] 00:10:15.417 } 00:10:15.417 ] 00:10:15.417 }' 00:10:15.417 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:15.417 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:15.417 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:15.417 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:15.417 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
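Annotation: the (( 4 == 4 )) above is the jcount helper counting poll groups — with -m 0xF the target runs one reactor per core, so nvmf_get_stats reports four poll groups, and before nvmf_create_transport none of them lists a transport. The two idioms used here, jcount (jq + wc -l) and jsum (jq + awk), applied to the same stats (assuming rpc_cmd is the suite's wrapper around scripts/rpc.py):

    stats=$(rpc_cmd nvmf_get_stats)
    echo "$stats" | jq '.poll_groups[].name' | wc -l      # jcount -> 4 for -m 0xF
    echo "$stats" | jq '.poll_groups[].admin_qpairs' \
        | awk '{s+=$1} END {print s}'                     # jsum -> 0, no qpairs yet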
00:10:15.417 18:52:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:15.726 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:15.726 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:15.726 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.726 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.726 [2024-07-24 18:52:53.031035] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:15.726 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.726 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:15.726 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.726 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.726 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.726 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:15.726 "tick_rate": 2700000000, 00:10:15.726 "poll_groups": [ 00:10:15.726 { 00:10:15.726 "name": "nvmf_tgt_poll_group_000", 00:10:15.726 "admin_qpairs": 0, 00:10:15.726 "io_qpairs": 0, 00:10:15.726 "current_admin_qpairs": 0, 00:10:15.727 "current_io_qpairs": 0, 00:10:15.727 "pending_bdev_io": 0, 00:10:15.727 "completed_nvme_io": 0, 00:10:15.727 "transports": [ 00:10:15.727 { 00:10:15.727 "trtype": "TCP" 00:10:15.727 } 00:10:15.727 ] 00:10:15.727 }, 00:10:15.727 { 00:10:15.727 "name": "nvmf_tgt_poll_group_001", 00:10:15.727 "admin_qpairs": 0, 00:10:15.727 "io_qpairs": 0, 00:10:15.727 "current_admin_qpairs": 0, 00:10:15.727 "current_io_qpairs": 0, 00:10:15.727 "pending_bdev_io": 0, 00:10:15.727 "completed_nvme_io": 0, 00:10:15.727 "transports": [ 00:10:15.727 { 00:10:15.727 "trtype": "TCP" 00:10:15.727 } 00:10:15.727 ] 00:10:15.727 }, 00:10:15.727 { 00:10:15.727 "name": "nvmf_tgt_poll_group_002", 00:10:15.727 "admin_qpairs": 0, 00:10:15.727 "io_qpairs": 0, 00:10:15.727 "current_admin_qpairs": 0, 00:10:15.727 "current_io_qpairs": 0, 00:10:15.727 "pending_bdev_io": 0, 00:10:15.727 "completed_nvme_io": 0, 00:10:15.727 "transports": [ 00:10:15.727 { 00:10:15.727 "trtype": "TCP" 00:10:15.727 } 00:10:15.727 ] 00:10:15.727 }, 00:10:15.727 { 00:10:15.727 "name": "nvmf_tgt_poll_group_003", 00:10:15.727 "admin_qpairs": 0, 00:10:15.727 "io_qpairs": 0, 00:10:15.727 "current_admin_qpairs": 0, 00:10:15.727 "current_io_qpairs": 0, 00:10:15.727 "pending_bdev_io": 0, 00:10:15.727 "completed_nvme_io": 0, 00:10:15.727 "transports": [ 00:10:15.727 { 00:10:15.727 "trtype": "TCP" 00:10:15.727 } 00:10:15.727 ] 00:10:15.727 } 00:10:15.727 ] 00:10:15.727 }' 00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:15.727 18:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.727 Malloc1 00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.727 [2024-07-24 18:52:53.193191] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420
00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0
00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420
00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme
00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme
00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme
00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme
00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]]
00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420
00:10:15.727 [2024-07-24 18:52:53.215788] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a'
00:10:15.727 Failed to write to /dev/nvme-fabrics: Input/output error
00:10:15.727 could not add new controller: failed to write to nvme-fabrics device
00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1
00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:10:15.727 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:15.728 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:15.728 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:15.728 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:10:16.658 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME
00:10:16.659 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:10:16.659 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:10:16.659 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:10:16.659 18:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:10:18.556 18:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:10:18.556 18:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:10:18.556 18:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:10:18.556 18:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:10:18.556 18:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:10:18.556 18:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:10:18.556 18:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:10:18.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:18.556 18:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:10:18.556 18:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:10:18.556 18:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:10:18.556 18:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:18.556 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:10:18.556 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:18.556 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:10:18.556 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:10:18.556 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:18.556 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:18.556 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:18.556 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:10:18.556 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0
00:10:18.556 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:10:18.556 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme
00:10:18.556 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:10:18.556 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme
00:10:18.556 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:10:18.556 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme
00:10:18.556 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:10:18.556 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme
00:10:18.556 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]]
00:10:18.556 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:10:18.556 [2024-07-24 18:52:56.054920] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a'
00:10:18.556 Failed to write to /dev/nvme-fabrics: Input/output error
00:10:18.556 could not add new controller: failed to write to nvme-fabrics device
00:10:18.556 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1
00:10:18.556 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:10:18.556 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:10:18.556 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:10:18.556 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
00:10:18.556 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:18.557 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:18.557 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:18.557 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:10:19.123 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME
00:10:19.123 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:10:19.123 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:10:19.123 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:10:19.123 18:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
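The trace above is the per-host access-control check of this test: while the host NQN is not on the subsystem's allowed list, the initiator's write to /dev/nvme-fabrics is refused with "does not allow host", and the same connect succeeds once the host has been whitelisted. Condensed to the commands the harness is driving (a sketch, not captured output; rpc_cmd is the harness wrapper around the SPDK RPC client, scripts/rpc.py in this tree, and HOSTNQN stands for the uuid-based NQN used throughout this run):

    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"     # whitelist the host
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$HOSTNQN"
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"  # connect is refused again
    rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1       # or drop host filtering entirely

The -e form of nvmf_subsystem_allow_any_host, run at target/rpc.sh@72 above, enables any-host access, which is why the connect that follows succeeds without re-adding the host.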
00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:21.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.651 [2024-07-24 18:52:58.838368] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.651 
18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.651 18:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:22.217 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:22.217 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:22.217 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:22.217 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:22.217 18:52:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:24.116 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:24.116 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:24.116 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:24.116 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:24.116 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:24.116 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:24.116 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:24.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.116 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:24.116 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:24.116 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:24.117 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:24.117 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:24.117 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:24.117 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
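The create/listen/attach/disconnect sequence above is the first of five identical passes driven by the seq 1 5 loop at target/rpc.sh@81; the teardown that follows (nvmf_subsystem_remove_ns, nvmf_delete_subsystem) completes it. One full pass reduces to the sketch below, condensed from the trace rather than copied from it; rpc_cmd, waitforserial, and waitforserial_disconnect are the harness helpers from autotest_common.sh, and Malloc1 is the bdev set up earlier in the job:

    for i in $(seq 1 5); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # namespace ID 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
            --hostnqn="$HOSTNQN" --hostid="$HOSTID"
        waitforserial SPDKISFASTANDAWESOME              # poll lsblk until the namespace appears
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

The second loop later in this section (target/rpc.sh@99-107) repeats the same cycle without the connect step, exercising only the RPC-side add/remove paths.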
00:10:24.117 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:24.117 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.117 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.117 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.117 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:24.117 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.117 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.117 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.117 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:24.117 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:24.117 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.117 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.117 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.117 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:24.117 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.117 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.117 [2024-07-24 18:53:01.648987] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:24.117 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.117 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:24.117 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.117 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.117 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.117 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:24.117 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.117 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.117 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.117 18:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:25.050 18:53:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:25.050 18:53:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:10:25.050 18:53:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:25.050 18:53:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:25.050 18:53:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:26.948 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.948 [2024-07-24 18:53:04.496624] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.948 18:53:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:27.513 18:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:27.513 18:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:27.513 18:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:27.513 18:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:27.513 18:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:30.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.038 18:53:07 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:30.038 [2024-07-24 18:53:07.217255] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.038 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:30.656 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:30.656 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:30.656 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:30.656 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:30.656 18:53:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:32.554 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:32.554 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:32.554 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:32.554 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:32.554 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:32.554 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:32.554 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:32.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.554 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:32.554 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:32.554 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:32.554 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:32.554 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:32.554 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:32.554 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:32.554 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:32.554 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.554 18:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.554 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.554 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:32.554 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.554 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.554 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.554 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:32.554 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:32.554 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.554 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.554 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.554 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:32.554 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.554 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.554 [2024-07-24 18:53:10.027019] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:32.554 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.554 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:32.554 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.554 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.554 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.554 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:32.554 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.554 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.554 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.554 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:33.122 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:33.122 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:33.122 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:33.122 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:33.122 18:53:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:35.649 18:53:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:35.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.649 18:53:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.649 [2024-07-24 18:53:12.778272] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:35.649 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.650 [2024-07-24 18:53:12.826373] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.650 [2024-07-24 18:53:12.874549] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.650 [2024-07-24 18:53:12.922684] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.650 [2024-07-24 18:53:12.970839] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.650 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.651 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.651 18:53:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:35.651 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:35.651 18:53:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:10:35.651 "tick_rate": 2700000000,
00:10:35.651 "poll_groups": [
00:10:35.651 {
00:10:35.651 "name": "nvmf_tgt_poll_group_000",
00:10:35.651 "admin_qpairs": 2,
00:10:35.651 "io_qpairs": 84,
00:10:35.651 "current_admin_qpairs": 0,
00:10:35.651 "current_io_qpairs": 0,
00:10:35.651 "pending_bdev_io": 0,
00:10:35.651 "completed_nvme_io": 185,
00:10:35.651 "transports": [
00:10:35.651 {
00:10:35.651 "trtype": "TCP"
00:10:35.651 }
00:10:35.651 ]
00:10:35.651 },
00:10:35.651 {
00:10:35.651 "name": "nvmf_tgt_poll_group_001",
00:10:35.651 "admin_qpairs": 2,
00:10:35.651 "io_qpairs": 84,
00:10:35.651 "current_admin_qpairs": 0,
00:10:35.651 "current_io_qpairs": 0,
00:10:35.651 "pending_bdev_io": 0,
00:10:35.651 "completed_nvme_io": 182,
00:10:35.651 "transports": [
00:10:35.651 {
00:10:35.651 "trtype": "TCP"
00:10:35.651 }
00:10:35.651 ]
00:10:35.651 },
00:10:35.651 {
00:10:35.651 "name": "nvmf_tgt_poll_group_002",
00:10:35.651 "admin_qpairs": 1,
00:10:35.651 "io_qpairs": 84,
00:10:35.651 "current_admin_qpairs": 0,
00:10:35.651 "current_io_qpairs": 0,
00:10:35.651 "pending_bdev_io": 0,
00:10:35.651 "completed_nvme_io": 135,
00:10:35.651 "transports": [
00:10:35.651 {
00:10:35.651 "trtype": "TCP"
00:10:35.651 }
00:10:35.651 ]
00:10:35.651 },
00:10:35.651 {
00:10:35.651 "name": "nvmf_tgt_poll_group_003",
00:10:35.651 "admin_qpairs": 2,
00:10:35.651 "io_qpairs": 84,
00:10:35.651 "current_admin_qpairs": 0,
00:10:35.651 "current_io_qpairs": 0,
00:10:35.651 "pending_bdev_io": 0,
00:10:35.651 "completed_nvme_io": 184,
00:10:35.651 "transports": [
00:10:35.651 {
00:10:35.651 "trtype": "TCP"
00:10:35.651 }
00:10:35.651 ]
00:10:35.651 }
00:10:35.651 ]
00:10:35.651 }'
00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq
'.poll_groups[].io_qpairs' 00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:35.651 rmmod nvme_tcp 00:10:35.651 rmmod nvme_fabrics 00:10:35.651 rmmod nvme_keyring 00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3105599 ']' 00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3105599 00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 3105599 ']' 00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 3105599 00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3105599 00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3105599' 00:10:35.651 killing process with pid 3105599 00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 3105599 00:10:35.651 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 3105599 00:10:35.911 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:35.911 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:35.911 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:35.911 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:35.911 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:35.911 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:35.911 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:35.911 18:53:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:38.442 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:38.442
00:10:38.442 real 0m25.211s
00:10:38.442 user 1m21.784s
00:10:38.442 sys 0m4.150s
00:10:38.442 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:38.442 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:38.442 ************************************
00:10:38.442 END TEST nvmf_rpc
00:10:38.442 ************************************
00:10:38.442 18:53:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:10:38.442 18:53:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:10:38.442 18:53:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:38.442 18:53:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:10:38.442 ************************************
00:10:38.442 START TEST nvmf_invalid
00:10:38.442 ************************************
00:10:38.442 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:10:38.442 * Looking for test storage...
00:10:38.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:38.442 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:38.442 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s
00:10:38.442 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:38.442 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:38.442 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:38.442 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:38.442 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:38.442 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:38.442 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:38.442 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:38.442 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:38.442 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:38.442 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:10:38.442 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:10:38.442 18:53:15
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:38.442 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:38.442 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:38.442 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:38.442 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:38.442 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.442 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.442 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.442 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.442 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.443 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.443 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:10:38.443 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.443 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:10:38.443 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:38.443 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:38.443 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:38.443 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:38.443 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:38.443 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:38.443 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:38.443 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:38.443 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:38.443 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:38.443 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:10:38.443 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:10:38.443 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:10:38.443 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:10:38.443 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:38.443 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:38.443 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:38.443 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:38.443 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:38.443 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.443 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.443 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.443 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:38.443 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:38.443 18:53:15 
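Note the RANDOM=0 at target/invalid.sh@16 above: assigning to RANDOM seeds bash's generator, so the "random" serial and model numbers produced later in this test are identical from run to run. A quick demonstration of that bash behavior:

  RANDOM=0; echo $RANDOM $RANDOM   # always prints the same pair
  RANDOM=0; echo $RANDOM $RANDOM   # ...and the same pair again after re-seeding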
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:10:38.443 18:53:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:40.343 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:40.343 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:10:40.343 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:40.343 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:40.343 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:40.343 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:40.343 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:40.343 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:10:40.343 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:40.343 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:10:40.343 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:10:40.343 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 
]] 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:40.344 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:40.344 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:40.344 Found net devices under 0000:09:00.0: cvl_0_0 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.344 18:53:17 
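The Found 0000:09:00.0 (0x8086 - 0x159b) lines above are the harness matching an E810 PCI function against Intel's vendor:device IDs, then resolving it to its kernel netdev through sysfs; the same probe repeats for the second port 0000:09:00.1 just below. Condensed to its essentials (a sketch of the logic traced here, not verbatim nvmf/common.sh):

  for pci in 0000:09:00.0 0000:09:00.1; do    # the two ice-bound e810 ports in this log
      for dev in "/sys/bus/pci/devices/$pci/net/"*; do
          [[ -e $dev ]] && echo "Found net devices under $pci: ${dev##*/}"
      done
  done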
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:40.344 Found net devices under 0000:09:00.1: cvl_0_1 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:10:40.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:40.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms
00:10:40.344
00:10:40.344 --- 10.0.0.2 ping statistics ---
00:10:40.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:40.344 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms
00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:40.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:40.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms
00:10:40.344
00:10:40.344 --- 10.0.0.1 ping statistics ---
00:10:40.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:40.344 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms
00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0
00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable
00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3110087
00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:40.344 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3110087
00:10:40.345 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 3110087 ']'
00:10:40.345 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:40.345 18:53:17
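Every command in the bring-up traced above appears verbatim in this log; pulled together, the topology is a simple split: the target-side port moves into its own network namespace, both ends get addresses on 10.0.0.0/24, and each side pings the other before the target starts.

  ip netns add cvl_0_0_ns_spdk                                   # target side gets a namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator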
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:40.345 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.345 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:40.345 18:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:40.345 [2024-07-24 18:53:17.870937] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:10:40.345 [2024-07-24 18:53:17.871008] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.345 EAL: No free 2048 kB hugepages reported on node 1 00:10:40.345 [2024-07-24 18:53:17.939302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:40.603 [2024-07-24 18:53:18.067600] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:40.603 [2024-07-24 18:53:18.067664] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:40.603 [2024-07-24 18:53:18.067681] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:40.603 [2024-07-24 18:53:18.067694] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:40.603 [2024-07-24 18:53:18.067705] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
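nvmfappstart launches nvmf_tgt inside the target namespace and blocks until its RPC socket answers. A minimal sketch of that launch-and-wait pattern (binary path, flags, and pid taken from this log; the polling loop is an assumption, not the harness's exact waitforlisten implementation):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!                                    # 3110087 in this run
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      kill -0 "$nvmfpid" 2>/dev/null || exit 1  # give up if the target died
      sleep 0.5
  done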
00:10:40.603 [2024-07-24 18:53:18.067789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:10:40.603 [2024-07-24 18:53:18.067845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:10:40.603 [2024-07-24 18:53:18.067896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:10:40.603 [2024-07-24 18:53:18.067899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:40.603 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:10:40.603 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0
00:10:40.603 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:10:40.603 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable
00:10:40.603 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:10:40.603 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:40.860 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:10:40.860 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode19192
00:10:41.116 [2024-07-24 18:53:18.502561] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:10:41.116 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request:
00:10:41.116 {
00:10:41.116 "nqn": "nqn.2016-06.io.spdk:cnode19192",
00:10:41.116 "tgt_name": "foobar",
00:10:41.116 "method": "nvmf_create_subsystem",
00:10:41.116 "req_id": 1
00:10:41.116 }
00:10:41.116 Got JSON-RPC error response
00:10:41.116 response:
00:10:41.116 {
00:10:41.116 "code": -32603,
00:10:41.116 "message": "Unable to find target foobar"
00:10:41.116 }'
00:10:41.116 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request:
00:10:41.116 {
00:10:41.116 "nqn": "nqn.2016-06.io.spdk:cnode19192",
00:10:41.116 "tgt_name": "foobar",
00:10:41.116 "method": "nvmf_create_subsystem",
00:10:41.116 "req_id": 1
00:10:41.116 }
00:10:41.116 Got JSON-RPC error response
00:10:41.116 response:
00:10:41.116 {
00:10:41.116 "code": -32603,
00:10:41.116 "message": "Unable to find target foobar"
00:10:41.116 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:10:41.116 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:10:41.116 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15587
00:10:41.373 [2024-07-24 18:53:18.771485] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15587: invalid serial number 'SPDKISFASTANDAWESOME'
00:10:41.373 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request:
00:10:41.373 {
00:10:41.373 "nqn": "nqn.2016-06.io.spdk:cnode15587",
00:10:41.373 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:10:41.373 "method": "nvmf_create_subsystem",
00:10:41.373 "req_id": 1
00:10:41.373 }
00:10:41.373 Got JSON-RPC error response
00:10:41.373 response:
00:10:41.373 {
00:10:41.373 "code": -32602,
00:10:41.373 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:10:41.373 }'
00:10:41.373 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request:
00:10:41.373 {
00:10:41.373 "nqn": "nqn.2016-06.io.spdk:cnode15587",
00:10:41.373 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:10:41.373 "method": "nvmf_create_subsystem",
00:10:41.373 "req_id": 1
00:10:41.373 }
00:10:41.373 Got JSON-RPC error response
00:10:41.373 response:
00:10:41.373 {
00:10:41.373 "code": -32602,
00:10:41.373 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:10:41.373 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:10:41.373 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:10:41.373 18:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode21240
00:10:41.631 [2024-07-24 18:53:19.016272] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21240: invalid model number 'SPDK_Controller'
00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request:
00:10:41.631 {
00:10:41.631 "nqn": "nqn.2016-06.io.spdk:cnode21240",
00:10:41.631 "model_number": "SPDK_Controller\u001f",
00:10:41.631 "method": "nvmf_create_subsystem",
00:10:41.631 "req_id": 1
00:10:41.631 }
00:10:41.631 Got JSON-RPC error response
00:10:41.631 response:
00:10:41.631 {
00:10:41.631 "code": -32602,
00:10:41.631 "message": "Invalid MN SPDK_Controller\u001f"
00:10:41.631 }'
00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request:
00:10:41.631 {
00:10:41.631 "nqn": "nqn.2016-06.io.spdk:cnode21240",
00:10:41.631 "model_number": "SPDK_Controller\u001f",
00:10:41.631 "method": "nvmf_create_subsystem",
00:10:41.631 "req_id": 1
00:10:41.631 }
00:10:41.631 Got JSON-RPC error response
00:10:41.631 response:
00:10:41.631 {
00:10:41.631 "code": -32602,
00:10:41.631 "message": "Invalid MN SPDK_Controller\u001f"
00:10:41.631 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid --
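The three rejected nvmf_create_subsystem calls above all follow one pattern: call rpc.py with a bad argument (unknown target, control character in the serial number, control character in the model number), capture the JSON-RPC error, and glob-match its message. One of them, reproduced standalone (default RPC socket assumed); the trace resuming below is gen_random_s assembling the next malformed serial number:

  out=$(scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode19192 2>&1) || true
  [[ $out == *'Unable to find target'* ]] || echo "unexpected error text: $out"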
target/invalid.sh@25 -- # printf %x 43 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:10:41.631 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@25 -- # string+=k 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ + == \- ]] 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '+G5Pa],O-R=@N(ai-1oJk' 00:10:41.632 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '+G5Pa],O-R=@N(ai-1oJk' nqn.2016-06.io.spdk:cnode14062 00:10:41.890 [2024-07-24 18:53:19.341391] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14062: invalid serial number '+G5Pa],O-R=@N(ai-1oJk' 00:10:41.890 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:10:41.890 { 00:10:41.890 "nqn": "nqn.2016-06.io.spdk:cnode14062", 00:10:41.890 "serial_number": "+G5Pa],O-R=@N(ai-1oJk", 00:10:41.890 "method": "nvmf_create_subsystem", 00:10:41.890 "req_id": 1 00:10:41.890 } 00:10:41.890 Got JSON-RPC error response 00:10:41.890 response: 00:10:41.890 { 00:10:41.890 "code": -32602, 00:10:41.890 "message": "Invalid SN +G5Pa],O-R=@N(ai-1oJk" 00:10:41.890 }' 00:10:41.890 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:10:41.890 { 00:10:41.890 "nqn": "nqn.2016-06.io.spdk:cnode14062", 00:10:41.890 "serial_number": "+G5Pa],O-R=@N(ai-1oJk", 00:10:41.890 "method": "nvmf_create_subsystem", 00:10:41.890 "req_id": 1 00:10:41.890 } 00:10:41.890 Got JSON-RPC error response 00:10:41.890 response: 00:10:41.890 { 00:10:41.890 "code": -32602, 00:10:41.890 "message": "Invalid SN +G5Pa],O-R=@N(ai-1oJk" 00:10:41.890 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:41.890 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:10:41.890 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:10:41.890 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:41.890 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:41.890 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:41.890 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:41.890 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.890 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:10:41.890 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:10:41.890 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:10:41.890 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
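The character-by-character trace above built the 21-byte serial '+G5Pa],O-R=@N(ai-1oJk'; the same helper is now generating a 41-byte model number below. Condensed, gen_random_s amounts to the following (reconstructed from this trace, not quoted from invalid.sh):

  gen_random_s() {
      local length=$1 ll
      local chars=({32..127})    # decimal codes: printable ASCII plus DEL
      local string
      for (( ll = 0; ll < length; ll++ )); do
          string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
      done
      # escape a leading '-' so the result cannot be parsed as an option
      # (the [[ + == \- ]] test visible in the trace)
      [[ ${string:0:1} == - ]] && string=${string/-/\\-}
      echo "$string"
  }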
target/invalid.sh@24 -- # (( ll++ )) 00:10:41.890 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.890 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:10:41.890 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:10:41.890 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:10:41.890 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.890 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.890 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:10:41.890 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:10:41.890 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:10:41.890 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.890 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:10:41.891 18:53:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:10:41.891 18:53:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.891 
18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:10:41.891 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.892 
18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:10:41.892 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:10:42.149 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:42.149 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:42.149 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:10:42.149 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:10:42.149 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:10:42.149 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:42.149 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:42.149 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:10:42.149 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:10:42.149 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:10:42.149 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:42.149 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:42.149 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:10:42.149 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:10:42.149 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:10:42.149 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:42.149 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:42.149 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:10:42.149 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:10:42.149 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:10:42.149 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:42.149 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:42.149 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:10:42.149 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:10:42.149 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:10:42.149 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:42.149 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:42.149 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:10:42.149 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 
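#
# Note: the xtrace above is target/invalid.sh composing a random model number one
# character at a time: pick a code point, render it with printf %x, turn it into a
# byte with echo -e '\xNN', and append it to $string. The final append ('%') and the
# assembled 41-character string follow below; it overflows the 40-byte NVMe MN
# field, so the nvmf_create_subsystem call is expected to fail. A minimal standalone
# sketch of the same generator (variable names illustrative, not the script's own):
#
#   length=41 string=
#   for (( ll = 0; ll < length; ll++ )); do
#       code=$(( RANDOM % 95 + 32 ))                 # printable-ish ASCII
#       string+=$(echo -e "\x$(printf %x "$code")")  # hex code point -> byte
#   done
#   echo "$string"
#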
00:10:42.149 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=%
00:10:42.149 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:10:42.149 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:10:42.149 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ } == \- ]]
00:10:42.150 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '}X=)R$~mLbez( `T1KU'\'';&Kb:45Lm^M:!gP,7:K%'
00:10:42.150 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '}X=)R$~mLbez( `T1KU'\'';&Kb:45Lm^M:!gP,7:K%' nqn.2016-06.io.spdk:cnode31797
[2024-07-24 18:53:19.750734] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31797: invalid model number '}X=)R$~mLbez( `T1KU';&Kb:45Lm^M:!gP,7:K%'
00:10:42.407 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request:
00:10:42.407 {
00:10:42.407 "nqn": "nqn.2016-06.io.spdk:cnode31797",
00:10:42.407 "model_number": "}X=)R$~mLbez( `\u007fT1KU'\'';&Kb:45Lm^M:!gP,7:K%",
00:10:42.407 "method": "nvmf_create_subsystem",
00:10:42.407 "req_id": 1
00:10:42.407 }
00:10:42.407 Got JSON-RPC error response
00:10:42.407 response:
00:10:42.407 {
00:10:42.407 "code": -32602,
00:10:42.407 "message": "Invalid MN }X=)R$~mLbez( `\u007fT1KU'\'';&Kb:45Lm^M:!gP,7:K%"
00:10:42.407 }'
00:10:42.407 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request:
00:10:42.407 {
00:10:42.407 "nqn": "nqn.2016-06.io.spdk:cnode31797",
00:10:42.407 "model_number": "}X=)R$~mLbez( `\u007fT1KU';&Kb:45Lm^M:!gP,7:K%",
00:10:42.407 "method": "nvmf_create_subsystem",
00:10:42.407 "req_id": 1
00:10:42.407 }
00:10:42.407 Got JSON-RPC error response
00:10:42.407 response:
00:10:42.407 {
00:10:42.407 "code": -32602,
00:10:42.407 "message": "Invalid MN }X=)R$~mLbez( `\u007fT1KU';&Kb:45Lm^M:!gP,7:K%"
00:10:42.407 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:10:42.407 18:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp
00:10:42.407 [2024-07-24 18:53:20.007674] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:42.665 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
00:10:42.922 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]]
00:10:42.922 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo ''
00:10:42.922 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1
00:10:42.922 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=
00:10:42.922 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421
00:10:42.922 [2024-07-24 18:53:20.493256] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2
00:10:42.922 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request:
00:10:42.922 { "nqn": "nqn.2016-06.io.spdk:cnode",
00:10:42.922 "listen_address": {
00:10:42.922 "trtype": "tcp",
00:10:42.922 "traddr": "",
00:10:42.922 "trsvcid": "4421"
00:10:42.922 },
00:10:42.922 "method": "nvmf_subsystem_remove_listener",
00:10:42.922 "req_id": 1
00:10:42.922 }
00:10:42.922 Got JSON-RPC error response
00:10:42.922 response:
00:10:42.922 {
00:10:42.922 "code": -32602,
00:10:42.922 "message": "Invalid parameters"
00:10:42.922 }'
00:10:42.922 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request:
00:10:42.922 {
00:10:42.922 "nqn": "nqn.2016-06.io.spdk:cnode",
00:10:42.922 "listen_address": {
00:10:42.922 "trtype": "tcp",
00:10:42.922 "traddr": "",
00:10:42.922 "trsvcid": "4421"
00:10:42.922 },
00:10:42.922 "method": "nvmf_subsystem_remove_listener",
00:10:42.922 "req_id": 1
00:10:42.922 }
00:10:42.922 Got JSON-RPC error response
00:10:42.922 response:
00:10:42.922 {
00:10:42.922 "code": -32602,
00:10:42.922 "message": "Invalid parameters"
00:10:42.922 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]]
00:10:42.922 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16656 -i 0
00:10:43.179 [2024-07-24 18:53:20.758033] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16656: invalid cntlid range [0-65519]
00:10:43.179 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request:
00:10:43.179 {
00:10:43.179 "nqn": "nqn.2016-06.io.spdk:cnode16656",
00:10:43.179 "min_cntlid": 0,
00:10:43.179 "method": "nvmf_create_subsystem",
00:10:43.179 "req_id": 1
00:10:43.179 }
00:10:43.179 Got JSON-RPC error response
00:10:43.179 response:
00:10:43.179 {
00:10:43.179 "code": -32602,
00:10:43.179 "message": "Invalid cntlid range [0-65519]"
00:10:43.179 }'
00:10:43.179 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request:
00:10:43.179 {
00:10:43.179 "nqn": "nqn.2016-06.io.spdk:cnode16656",
00:10:43.179 "min_cntlid": 0,
00:10:43.179 "method": "nvmf_create_subsystem",
00:10:43.179 "req_id": 1
00:10:43.179 }
00:10:43.179 Got JSON-RPC error response
00:10:43.179 response:
00:10:43.179 {
00:10:43.179 "code": -32602,
00:10:43.179 "message": "Invalid cntlid range [0-65519]"
00:10:43.179 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:10:43.436 18:53:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3132 -i 65520
00:10:43.436 [2024-07-24 18:53:21.010872] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3132: invalid cntlid range [65520-65519]
00:10:43.436 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request:
00:10:43.436 {
00:10:43.436 "nqn": "nqn.2016-06.io.spdk:cnode3132",
00:10:43.436 "min_cntlid": 65520,
00:10:43.436 "method": "nvmf_create_subsystem",
00:10:43.436 "req_id": 1
00:10:43.436 }
00:10:43.436 Got JSON-RPC error response
00:10:43.436 response:
00:10:43.436 {
00:10:43.436 "code": -32602,
00:10:43.436 "message": "Invalid cntlid range [65520-65519]"
00:10:43.436 }'
00:10:43.436 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request:
00:10:43.436 {
00:10:43.436 "nqn": "nqn.2016-06.io.spdk:cnode3132",
00:10:43.436 "min_cntlid": 65520,
00:10:43.436 "method": "nvmf_create_subsystem",
00:10:43.436 "req_id": 1
00:10:43.436 }
00:10:43.436 Got JSON-RPC error response
00:10:43.436 response:
00:10:43.436 {
00:10:43.436 "code": -32602,
00:10:43.436 "message": "Invalid cntlid range [65520-65519]"
00:10:43.436 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:10:43.436 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11007 -I 0
00:10:43.693 [2024-07-24 18:53:21.251719] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11007: invalid cntlid range [1-0]
00:10:43.693 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request:
00:10:43.693 {
00:10:43.693 "nqn": "nqn.2016-06.io.spdk:cnode11007",
00:10:43.693 "max_cntlid": 0,
00:10:43.693 "method": "nvmf_create_subsystem",
00:10:43.693 "req_id": 1
00:10:43.693 }
00:10:43.693 Got JSON-RPC error response
00:10:43.693 response:
00:10:43.693 {
00:10:43.693 "code": -32602,
00:10:43.693 "message": "Invalid cntlid range [1-0]"
00:10:43.693 }'
00:10:43.693 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request:
00:10:43.693 {
00:10:43.693 "nqn": "nqn.2016-06.io.spdk:cnode11007",
00:10:43.693 "max_cntlid": 0,
00:10:43.693 "method": "nvmf_create_subsystem",
00:10:43.694 "req_id": 1
00:10:43.694 }
00:10:43.694 Got JSON-RPC error response
00:10:43.694 response:
00:10:43.694 {
00:10:43.694 "code": -32602,
00:10:43.694 "message": "Invalid cntlid range [1-0]"
00:10:43.694 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:10:43.694 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6937 -I 65520
00:10:43.951 [2024-07-24 18:53:21.500484] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6937: invalid cntlid range [1-65520]
00:10:43.951 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request:
00:10:43.951 {
00:10:43.951 "nqn": "nqn.2016-06.io.spdk:cnode6937",
00:10:43.951 "max_cntlid": 65520,
00:10:43.951 "method": "nvmf_create_subsystem",
00:10:43.951 "req_id": 1
00:10:43.951 }
00:10:43.951 Got JSON-RPC error response
00:10:43.951 response:
00:10:43.951 {
00:10:43.951 "code": -32602,
00:10:43.951 "message": "Invalid cntlid range [1-65520]"
00:10:43.951 }'
00:10:43.951 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request:
00:10:43.951 {
00:10:43.951 "nqn": "nqn.2016-06.io.spdk:cnode6937",
00:10:43.951 "max_cntlid": 65520,
00:10:43.951 "method": "nvmf_create_subsystem",
00:10:43.951 "req_id": 1
00:10:43.951 }
00:10:43.951 Got JSON-RPC error response
00:10:43.951 response:
00:10:43.951 {
00:10:43.951 "code": -32602,
00:10:43.951 "message": "Invalid cntlid range [1-65520]"
00:10:43.951 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:10:43.951 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17778 -i 6 -I 5
00:10:44.208 [2024-07-24 18:53:21.741317] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17778: invalid cntlid range [6-5]
00:10:44.208 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request:
00:10:44.208 { "nqn": "nqn.2016-06.io.spdk:cnode17778",
00:10:44.208 "min_cntlid": 6,
00:10:44.208 "max_cntlid": 5,
00:10:44.208 "method": "nvmf_create_subsystem",
00:10:44.208 "req_id": 1
00:10:44.208 }
00:10:44.208 Got JSON-RPC error response
00:10:44.208 response:
00:10:44.208 {
00:10:44.208 "code": -32602,
00:10:44.208 "message": "Invalid cntlid range [6-5]"
00:10:44.208 }'
00:10:44.208 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request:
00:10:44.208 {
00:10:44.208 "nqn": "nqn.2016-06.io.spdk:cnode17778",
00:10:44.208 "min_cntlid": 6,
00:10:44.208 "max_cntlid": 5,
00:10:44.208 "method": "nvmf_create_subsystem",
00:10:44.208 "req_id": 1
00:10:44.208 }
00:10:44.208 Got JSON-RPC error response
00:10:44.208 response:
00:10:44.208 {
00:10:44.208 "code": -32602,
00:10:44.208 "message": "Invalid cntlid range [6-5]"
00:10:44.209 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:10:44.209 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar
00:10:44.466 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request:
00:10:44.466 {
00:10:44.466 "name": "foobar",
00:10:44.466 "method": "nvmf_delete_target",
00:10:44.466 "req_id": 1
00:10:44.466 }
00:10:44.466 Got JSON-RPC error response
00:10:44.466 response:
00:10:44.466 {
00:10:44.466 "code": -32602,
00:10:44.466 "message": "The specified target doesn'\''t exist, cannot delete it."
00:10:44.466 }'
00:10:44.466 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request:
00:10:44.466 {
00:10:44.466 "name": "foobar",
00:10:44.466 "method": "nvmf_delete_target",
00:10:44.466 "req_id": 1
00:10:44.466 }
00:10:44.466 Got JSON-RPC error response
00:10:44.466 response:
00:10:44.466 {
00:10:44.466 "code": -32602,
00:10:44.466 "message": "The specified target doesn't exist, cannot delete it."
00:10:44.466 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:10:44.466 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:10:44.466 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:10:44.466 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:44.466 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:10:44.466 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:44.466 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:10:44.466 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:44.466 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:44.466 rmmod nvme_tcp 00:10:44.466 rmmod nvme_fabrics 00:10:44.466 rmmod nvme_keyring 00:10:44.466 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:44.466 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:10:44.466 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:10:44.466 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3110087 ']' 00:10:44.466 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3110087 00:10:44.466 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 3110087 ']' 00:10:44.466 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 3110087 00:10:44.466 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:10:44.466 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:44.466 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3110087 00:10:44.466 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:44.466 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:44.466 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3110087' 00:10:44.466 killing process with pid 3110087 00:10:44.466 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 3110087 00:10:44.466 18:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 3110087 00:10:44.725 18:53:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:44.725 18:53:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:44.725 18:53:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:44.725 18:53:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:44.725 18:53:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:44.725 18:53:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.725 
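#
# Note: the RPC-level negative tests of nvmf_invalid are now done and the
# nvmftestfini teardown continues below. Taken together, the "Invalid cntlid range"
# responses above pin down the nvmf_create_subsystem bounds: controller IDs must lie
# in 1-65519 and min_cntlid must not exceed max_cntlid, so -i 0, -i 65520, -I 0,
# -I 65520 and -i 6 -I 5 each return JSON-RPC error -32602, which the script asserts
# with bash's escaped-glob match ([[ $out == *\I\n\v\a\l\i\d\ ... ]]). A condensed
# sketch of the same probes (the loop wrapper is illustrative; rpc.py path and nqn
# prefix as used in this workspace):
#
#   rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
#   for args in '-i 0' '-i 65520' '-I 0' '-I 65520' '-i 6 -I 5'; do
#       out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 $args 2>&1) || true
#       [[ $out == *'Invalid cntlid range'* ]] && echo "rejected as expected: $args"
#   done
#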
18:53:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.725 18:53:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:47.275 00:10:47.275 real 0m8.732s 00:10:47.275 user 0m20.194s 00:10:47.275 sys 0m2.486s 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:47.275 ************************************ 00:10:47.275 END TEST nvmf_invalid 00:10:47.275 ************************************ 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:47.275 ************************************ 00:10:47.275 START TEST nvmf_connect_stress 00:10:47.275 ************************************ 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:47.275 * Looking for test storage... 00:10:47.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # 
NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:47.275 18:53:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:49.175 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:49.175 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:49.175 Found net devices under 0000:09:00.0: cvl_0_0 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:49.175 18:53:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:49.175 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:49.176 Found net devices under 0000:09:00.1: cvl_0_1 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 
dev cvl_0_0 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:49.176 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:49.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:10:49.176 00:10:49.176 --- 10.0.0.2 ping statistics --- 00:10:49.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.176 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:49.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:49.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:10:49.176 00:10:49.176 --- 10.0.0.1 ping statistics --- 00:10:49.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.176 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3112712 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3112712 00:10:49.176 18:53:26 
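#
# Note: nvmf_tcp_init above wires the two ice ports into a point-to-point test rig:
# cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2)
# while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the
# bidirectional pings confirm the path (0% loss both ways) before nvmf_tgt is
# started inside the namespace. Condensed from the commands traced above:
#
#   ip netns add cvl_0_0_ns_spdk
#   ip link set cvl_0_0 netns cvl_0_0_ns_spdk
#   ip addr add 10.0.0.1/24 dev cvl_0_1
#   ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
#   ip link set cvl_0_1 up
#   ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
#   iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
#   ping -c 1 10.0.0.2    # reverse ping runs via ip netns exec, as logged above
#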
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 3112712 ']' 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:49.176 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.176 [2024-07-24 18:53:26.671702] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:10:49.176 [2024-07-24 18:53:26.671793] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:49.176 EAL: No free 2048 kB hugepages reported on node 1 00:10:49.176 [2024-07-24 18:53:26.741572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:49.434 [2024-07-24 18:53:26.852918] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:49.434 [2024-07-24 18:53:26.852981] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:49.434 [2024-07-24 18:53:26.853010] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:49.434 [2024-07-24 18:53:26.853021] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:49.434 [2024-07-24 18:53:26.853031] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
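#
# Note: nvmf_tgt is now up inside the namespace (pid 3112712) with tracepoint group
# mask 0xFFFF and shm id 0, so while the test runs its events could be inspected
# exactly as the app_setup_trace notices above suggest (an operator-side step, not
# something this job executes):
#
#   spdk_trace -s nvmf -i 0            # live snapshot of the trace ring
#   cp /dev/shm/nvmf_trace.0 /tmp/     # or keep the ring for offline analysis
#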
00:10:49.434 [2024-07-24 18:53:26.853095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.434 [2024-07-24 18:53:26.854124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:49.434 [2024-07-24 18:53:26.854136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.434 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:49.434 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:10:49.434 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:49.434 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:49.434 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.434 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:49.434 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:49.435 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.435 18:53:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.435 [2024-07-24 18:53:26.988664] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:49.435 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.435 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:49.435 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.435 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.435 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.435 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:49.435 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.435 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.435 [2024-07-24 18:53:27.018055] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:49.435 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.435 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:49.435 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.435 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.435 NULL1 00:10:49.435 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.435 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@21 -- # PERF_PID=3112748 00:10:49.435 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:49.435 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:49.435 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:49.435 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:49.692 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.692 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.692 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.692 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
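#
# Note: connect_stress.sh has just recorded PERF_PID=3112748 for the stress client
# it launched against the cnode1 listener created above, and the `seq 1 20` loop
# that starts here (its body continues below) stocks rpc.txt with twenty RPC
# stanzas (here-doc contents are not echoed by xtrace). The shape of the launch,
# reconstructed from the trace (backgrounding shown explicitly; the script's own
# plumbing may differ):
#
#   ./test/nvme/connect_stress/connect_stress -c 0x1 \
#       -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
#       -t 10 &
#   PERF_PID=$!
#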
00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.693 EAL: No free 2048 kB hugepages reported on node 1 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3112748 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.693 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.950 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.950 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3112748 00:10:49.950 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.950 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.950 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:50.208 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.208 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3112748 00:10:50.208 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.208 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.208 18:53:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:50.465 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.465 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3112748 00:10:50.465 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.465 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.465 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.029 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.029 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3112748 00:10:51.029 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.029 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.029 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.286 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.286 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3112748 00:10:51.286 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.286 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.286 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.544 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.544 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3112748 00:10:51.544 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.544 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.544 18:53:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.801 18:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.801 18:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3112748 00:10:51.801 18:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.801 18:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.801 18:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.059 18:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.059 18:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3112748 00:10:52.059 18:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.059 18:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.059 18:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.692 18:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.692 18:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3112748 00:10:52.692 18:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.692 18:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.692 18:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.949 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.949 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3112748 00:10:52.949 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.949 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.949 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.207 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.207 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3112748 00:10:53.207 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.207 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.207 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.464 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.464 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3112748 00:10:53.464 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.464 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.464 18:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.722 18:53:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.722 18:53:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3112748 00:10:53.722 18:53:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.722 18:53:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.722 18:53:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.980 18:53:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.980 18:53:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3112748 00:10:53.980 18:53:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.980 18:53:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.980 18:53:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.544 18:53:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.544 18:53:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3112748 00:10:54.544 18:53:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.544 18:53:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.544 18:53:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.802 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.802 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3112748 00:10:54.802 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.802 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.802 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.059 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.059 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3112748 00:10:55.059 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.059 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.059 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.316 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.316 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3112748 00:10:55.316 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.316 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.316 18:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.574 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.574 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3112748 00:10:55.574 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.574 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.574 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.139 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.139 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3112748 00:10:56.139 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.139 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.139 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.397 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.397 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3112748 00:10:56.397 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.397 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.397 18:53:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.654 18:53:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.654 18:53:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3112748 00:10:56.654 18:53:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.654 18:53:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.654 18:53:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.910 18:53:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.910 18:53:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3112748 00:10:56.911 18:53:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.911 18:53:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.911 18:53:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.475 18:53:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.475 18:53:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3112748 00:10:57.475 18:53:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.475 18:53:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.475 18:53:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.732 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.732 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3112748 00:10:57.732 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.732 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.732 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.990 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.990 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3112748 00:10:57.990 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.990 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.990 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:58.247 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.247 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3112748 00:10:58.247 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:58.247 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.247 18:53:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:58.505 18:53:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.505 18:53:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3112748 00:10:58.505 18:53:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:58.505 18:53:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.505 18:53:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.069 18:53:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.069 18:53:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3112748 00:10:59.069 18:53:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:59.069 18:53:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.069 18:53:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.326 18:53:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.326 18:53:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3112748 00:10:59.326 18:53:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:59.326 18:53:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.326 18:53:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.584 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.584 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3112748 00:10:59.584 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:59.584 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.584 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.840 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:59.840 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.840 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3112748 00:10:59.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: 
kill: (3112748) - No such process 00:10:59.840 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3112748 00:10:59.840 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:59.840 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:59.840 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:59.840 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:59.840 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:10:59.840 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:59.840 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:10:59.840 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:59.840 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:59.840 rmmod nvme_tcp 00:10:59.840 rmmod nvme_fabrics 00:10:59.840 rmmod nvme_keyring 00:10:59.840 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:59.840 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:10:59.840 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:10:59.840 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3112712 ']' 00:10:59.840 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3112712 00:10:59.840 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 3112712 ']' 00:10:59.840 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 3112712 00:10:59.840 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:10:59.840 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:59.840 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3112712 00:10:59.840 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:59.840 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:59.840 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3112712' 00:10:59.840 killing process with pid 3112712 00:10:59.841 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 3112712 00:10:59.841 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 3112712 00:11:00.406 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:00.406 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:00.406 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:11:00.406 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:00.406 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:00.406 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.406 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.406 18:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.309 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:02.309 00:11:02.309 real 0m15.420s 00:11:02.309 user 0m38.334s 00:11:02.309 sys 0m6.080s 00:11:02.309 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:02.309 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.309 ************************************ 00:11:02.309 END TEST nvmf_connect_stress 00:11:02.309 ************************************ 00:11:02.309 18:53:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:02.309 18:53:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:02.309 18:53:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:02.309 18:53:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:02.309 ************************************ 00:11:02.309 START TEST nvmf_fused_ordering 00:11:02.309 ************************************ 00:11:02.309 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:02.309 * Looking for test storage... 
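The test that just finished distills to a small, reusable pattern: launch a long-lived connect/disconnect stress workload in the background, then keep the target's RPC plane busy for as long as kill -0 reports the workload alive. A condensed bash sketch of that pattern (the rpc.py path is an assumption, and the query RPC in the loop is a simplification of the 20-command rpc.txt replay the real script performs; the stress binary path and connection string mirror the log):

#!/usr/bin/env bash
# Sketch of the connect_stress pattern, not the harness's exact script.
STRESS=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed client path; rpc_cmd wraps this
NQN=nqn.2016-06.io.spdk:cnode1

# Run the stress tool against the TCP listener for 10 seconds, in the background.
"$STRESS" -c 0x1 \
    -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$NQN" -t 10 &
PERF_PID=$!

# kill -0 sends no signal; it only asks whether the PID still exists.
# While the workload lives, keep hitting the target's RPC socket.
while kill -0 "$PERF_PID" 2>/dev/null; do
    "$RPC" bdev_get_bdevs >/dev/null 2>&1 || true   # real test replays a prepared rpc.txt here
done

wait "$PERF_PID"   # reap it; probing a vanished PID gives the "No such process" seen above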
00:11:02.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:02.309 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.309 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:02.309 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.309 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.309 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.309 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.309 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.309 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.309 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.309 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.309 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.309 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.309 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:02.309 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:02.309 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.309 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.309 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.309 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.309 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:02.309 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.309 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.309 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.309 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go triplet, re-prepended by earlier sourcings, elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:02.309 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... duplicated toolchain entries elided ...]
00:11:02.309 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... duplicated toolchain entries elided ...]
00:11:02.309 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH
00:11:02.310 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... duplicated toolchain entries elided ...]
00:11:02.310 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0
00:11:02.310 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:11:02.310 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:11:02.310 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:02.310 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:02.310 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:02.310 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33
-- # '[' -n '' ']' 00:11:02.310 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:02.310 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:02.310 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:02.310 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:02.310 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.310 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:02.310 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:02.310 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:02.310 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.310 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.310 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.310 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:02.310 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:02.310 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:02.310 18:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:04.844 18:53:41 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:04.844 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:04.844 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
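What gather_supported_nvmf_pci_devs is doing above is a pure sysfs walk: each whitelisted PCI device ID (0x159b is an Intel E810 function) is resolved to its kernel net interface through /sys/bus/pci/devices/<bdf>/net/. A minimal sketch of that lookup, with the BDFs taken from the log and everything else simplified:

#!/usr/bin/env bash
# Map PCI functions to their net interfaces the same way the harness does.
for pci in 0000:09:00.0 0000:09:00.1; do        # BDFs reported in the log
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdir" ] || continue            # skip if no net driver is bound
        dev=${netdir##*/}                       # e.g. cvl_0_0
        state=$(cat "$netdir/operstate" 2>/dev/null)
        echo "Found net devices under $pci: $dev ($state)"
    done
done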
00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:04.844 Found net devices under 0000:09:00.0: cvl_0_0 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:04.844 Found net devices under 0000:09:00.1: cvl_0_1 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.844 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:04.845 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:04.845 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:11:04.845 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:04.845 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:04.845 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:04.845 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:04.845 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:04.845 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:04.845 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:04.845 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:04.845 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:04.845 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:04.845 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:04.845 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:04.845 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:04.845 18:53:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:04.845 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:04.845 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:04.845 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:04.845 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:04.845 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:04.845 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:04.845 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:04.845 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:04.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:04.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:11:04.845 00:11:04.845 --- 10.0.0.2 ping statistics --- 00:11:04.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.845 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:11:04.845 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:04.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:04.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:11:04.845 00:11:04.845 --- 10.0.0.1 ping statistics --- 00:11:04.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.845 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:11:04.845 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:04.845 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:04.845 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:04.845 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:04.845 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:04.845 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:04.845 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:04.845 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:04.845 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:04.845 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:04.845 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:04.845 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:04.845 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:04.845 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3115926 00:11:04.845 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:04.845 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3115926 00:11:04.845 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 3115926 ']' 00:11:04.845 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.845 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:04.845 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.845 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:04.845 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:04.845 [2024-07-24 18:53:42.223811] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
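nvmf_tcp_init has now built the whole loopback topology on one host: port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace to act as the target at 10.0.0.2, while its sibling cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and both directions are verified with a ping. The same commands the harness just ran, gathered into one sketch:

#!/usr/bin/env bash
set -e
NS=cvl_0_0_ns_spdk
TGT=cvl_0_0   # becomes the target-side port
INI=cvl_0_1   # stays in the root namespace as the initiator

ip netns add "$NS"
ip link set "$TGT" netns "$NS"                         # isolate the target port
ip addr add 10.0.0.1/24 dev "$INI"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
ip link set "$INI" up
ip netns exec "$NS" ip link set "$TGT" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP

ping -c 1 10.0.0.2                      # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator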
00:11:04.845 [2024-07-24 18:53:42.223896] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.845 EAL: No free 2048 kB hugepages reported on node 1 00:11:04.845 [2024-07-24 18:53:42.288964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.845 [2024-07-24 18:53:42.394093] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.845 [2024-07-24 18:53:42.394161] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:04.845 [2024-07-24 18:53:42.394189] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.845 [2024-07-24 18:53:42.394200] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.845 [2024-07-24 18:53:42.394211] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:04.845 [2024-07-24 18:53:42.394238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.104 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:05.104 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:11:05.104 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:05.104 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:05.104 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:05.104 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:05.104 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:05.104 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.104 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:05.104 [2024-07-24 18:53:42.531018] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:05.104 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.104 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:05.104 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.104 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:05.104 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.104 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:05.104 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.104 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@10 -- # set +x 00:11:05.104 [2024-07-24 18:53:42.547227] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:05.104 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.104 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:05.104 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.104 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:05.104 NULL1 00:11:05.104 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.104 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:05.104 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.104 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:05.104 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.104 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:05.104 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.104 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:05.104 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.104 18:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:05.104 [2024-07-24 18:53:42.592500] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
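By this point the target has been provisioned entirely over /var/tmp/spdk.sock: transport, subsystem, listener, a null bdev, and a namespace. The rpc_cmd calls above map onto SPDK's scripts/rpc.py client; condensed into a standalone sketch (the rpc.py path is an assumption, the arguments are exactly those the harness issued):

#!/usr/bin/env bash
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed client path
NQN=nqn.2016-06.io.spdk:cnode1

"$RPC" nvmf_create_transport -t tcp -o -u 8192       # TCP transport; -u 8192 caps in-capsule data, -o as issued by the harness
"$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10   # -a: allow any host, -m: max 10 namespaces
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420 # listen on the target-namespace address
"$RPC" bdev_null_create NULL1 1000 512               # 1000 MiB null bdev, 512-byte blocks
"$RPC" bdev_wait_for_examine                         # let bdev examine callbacks settle first
"$RPC" nvmf_subsystem_add_ns "$NQN" NULL1            # appears below as "Namespace ID: 1 size: 1GB"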
00:11:05.104 [2024-07-24 18:53:42.592544] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3116028 ]
00:11:05.104 EAL: No free 2048 kB hugepages reported on node 1
00:11:05.669 Attached to nqn.2016-06.io.spdk:cnode1
00:11:05.669 Namespace ID: 1 size: 1GB
00:11:05.669 fused_ordering(0)
00:11:05.669 fused_ordering(1)
[... fused_ordering(2) through fused_ordering(417) elided: one counter line per completed iteration, strictly increasing, timestamps advancing 00:11:05.669 -> 00:11:06.234 -> 00:11:07.168 ...]
00:11:07.168 fused_ordering(418)
fused_ordering(419) 00:11:07.168 fused_ordering(420) 00:11:07.168 fused_ordering(421) 00:11:07.168 fused_ordering(422) 00:11:07.168 fused_ordering(423) 00:11:07.168 fused_ordering(424) 00:11:07.168 fused_ordering(425) 00:11:07.168 fused_ordering(426) 00:11:07.168 fused_ordering(427) 00:11:07.168 fused_ordering(428) 00:11:07.168 fused_ordering(429) 00:11:07.168 fused_ordering(430) 00:11:07.168 fused_ordering(431) 00:11:07.168 fused_ordering(432) 00:11:07.168 fused_ordering(433) 00:11:07.168 fused_ordering(434) 00:11:07.168 fused_ordering(435) 00:11:07.168 fused_ordering(436) 00:11:07.168 fused_ordering(437) 00:11:07.168 fused_ordering(438) 00:11:07.168 fused_ordering(439) 00:11:07.168 fused_ordering(440) 00:11:07.168 fused_ordering(441) 00:11:07.168 fused_ordering(442) 00:11:07.168 fused_ordering(443) 00:11:07.168 fused_ordering(444) 00:11:07.168 fused_ordering(445) 00:11:07.168 fused_ordering(446) 00:11:07.168 fused_ordering(447) 00:11:07.168 fused_ordering(448) 00:11:07.168 fused_ordering(449) 00:11:07.168 fused_ordering(450) 00:11:07.168 fused_ordering(451) 00:11:07.168 fused_ordering(452) 00:11:07.168 fused_ordering(453) 00:11:07.168 fused_ordering(454) 00:11:07.168 fused_ordering(455) 00:11:07.168 fused_ordering(456) 00:11:07.168 fused_ordering(457) 00:11:07.168 fused_ordering(458) 00:11:07.168 fused_ordering(459) 00:11:07.168 fused_ordering(460) 00:11:07.168 fused_ordering(461) 00:11:07.168 fused_ordering(462) 00:11:07.168 fused_ordering(463) 00:11:07.168 fused_ordering(464) 00:11:07.168 fused_ordering(465) 00:11:07.168 fused_ordering(466) 00:11:07.168 fused_ordering(467) 00:11:07.168 fused_ordering(468) 00:11:07.168 fused_ordering(469) 00:11:07.168 fused_ordering(470) 00:11:07.168 fused_ordering(471) 00:11:07.168 fused_ordering(472) 00:11:07.168 fused_ordering(473) 00:11:07.168 fused_ordering(474) 00:11:07.168 fused_ordering(475) 00:11:07.168 fused_ordering(476) 00:11:07.168 fused_ordering(477) 00:11:07.168 fused_ordering(478) 00:11:07.168 fused_ordering(479) 00:11:07.168 fused_ordering(480) 00:11:07.168 fused_ordering(481) 00:11:07.168 fused_ordering(482) 00:11:07.168 fused_ordering(483) 00:11:07.168 fused_ordering(484) 00:11:07.168 fused_ordering(485) 00:11:07.168 fused_ordering(486) 00:11:07.168 fused_ordering(487) 00:11:07.168 fused_ordering(488) 00:11:07.168 fused_ordering(489) 00:11:07.168 fused_ordering(490) 00:11:07.168 fused_ordering(491) 00:11:07.168 fused_ordering(492) 00:11:07.168 fused_ordering(493) 00:11:07.168 fused_ordering(494) 00:11:07.168 fused_ordering(495) 00:11:07.168 fused_ordering(496) 00:11:07.168 fused_ordering(497) 00:11:07.168 fused_ordering(498) 00:11:07.168 fused_ordering(499) 00:11:07.168 fused_ordering(500) 00:11:07.168 fused_ordering(501) 00:11:07.168 fused_ordering(502) 00:11:07.168 fused_ordering(503) 00:11:07.168 fused_ordering(504) 00:11:07.168 fused_ordering(505) 00:11:07.168 fused_ordering(506) 00:11:07.168 fused_ordering(507) 00:11:07.168 fused_ordering(508) 00:11:07.168 fused_ordering(509) 00:11:07.168 fused_ordering(510) 00:11:07.168 fused_ordering(511) 00:11:07.168 fused_ordering(512) 00:11:07.168 fused_ordering(513) 00:11:07.168 fused_ordering(514) 00:11:07.168 fused_ordering(515) 00:11:07.168 fused_ordering(516) 00:11:07.168 fused_ordering(517) 00:11:07.168 fused_ordering(518) 00:11:07.168 fused_ordering(519) 00:11:07.168 fused_ordering(520) 00:11:07.168 fused_ordering(521) 00:11:07.168 fused_ordering(522) 00:11:07.168 fused_ordering(523) 00:11:07.168 fused_ordering(524) 00:11:07.168 fused_ordering(525) 00:11:07.168 fused_ordering(526) 
00:11:07.168 fused_ordering(527) 00:11:07.168 fused_ordering(528) 00:11:07.168 fused_ordering(529) 00:11:07.169 fused_ordering(530) 00:11:07.169 fused_ordering(531) 00:11:07.169 fused_ordering(532) 00:11:07.169 fused_ordering(533) 00:11:07.169 fused_ordering(534) 00:11:07.169 fused_ordering(535) 00:11:07.169 fused_ordering(536) 00:11:07.169 fused_ordering(537) 00:11:07.169 fused_ordering(538) 00:11:07.169 fused_ordering(539) 00:11:07.169 fused_ordering(540) 00:11:07.169 fused_ordering(541) 00:11:07.169 fused_ordering(542) 00:11:07.169 fused_ordering(543) 00:11:07.169 fused_ordering(544) 00:11:07.169 fused_ordering(545) 00:11:07.169 fused_ordering(546) 00:11:07.169 fused_ordering(547) 00:11:07.169 fused_ordering(548) 00:11:07.169 fused_ordering(549) 00:11:07.169 fused_ordering(550) 00:11:07.169 fused_ordering(551) 00:11:07.169 fused_ordering(552) 00:11:07.169 fused_ordering(553) 00:11:07.169 fused_ordering(554) 00:11:07.169 fused_ordering(555) 00:11:07.169 fused_ordering(556) 00:11:07.169 fused_ordering(557) 00:11:07.169 fused_ordering(558) 00:11:07.169 fused_ordering(559) 00:11:07.169 fused_ordering(560) 00:11:07.169 fused_ordering(561) 00:11:07.169 fused_ordering(562) 00:11:07.169 fused_ordering(563) 00:11:07.169 fused_ordering(564) 00:11:07.169 fused_ordering(565) 00:11:07.169 fused_ordering(566) 00:11:07.169 fused_ordering(567) 00:11:07.169 fused_ordering(568) 00:11:07.169 fused_ordering(569) 00:11:07.169 fused_ordering(570) 00:11:07.169 fused_ordering(571) 00:11:07.169 fused_ordering(572) 00:11:07.169 fused_ordering(573) 00:11:07.169 fused_ordering(574) 00:11:07.169 fused_ordering(575) 00:11:07.169 fused_ordering(576) 00:11:07.169 fused_ordering(577) 00:11:07.169 fused_ordering(578) 00:11:07.169 fused_ordering(579) 00:11:07.169 fused_ordering(580) 00:11:07.169 fused_ordering(581) 00:11:07.169 fused_ordering(582) 00:11:07.169 fused_ordering(583) 00:11:07.169 fused_ordering(584) 00:11:07.169 fused_ordering(585) 00:11:07.169 fused_ordering(586) 00:11:07.169 fused_ordering(587) 00:11:07.169 fused_ordering(588) 00:11:07.169 fused_ordering(589) 00:11:07.169 fused_ordering(590) 00:11:07.169 fused_ordering(591) 00:11:07.169 fused_ordering(592) 00:11:07.169 fused_ordering(593) 00:11:07.169 fused_ordering(594) 00:11:07.169 fused_ordering(595) 00:11:07.169 fused_ordering(596) 00:11:07.169 fused_ordering(597) 00:11:07.169 fused_ordering(598) 00:11:07.169 fused_ordering(599) 00:11:07.169 fused_ordering(600) 00:11:07.169 fused_ordering(601) 00:11:07.169 fused_ordering(602) 00:11:07.169 fused_ordering(603) 00:11:07.169 fused_ordering(604) 00:11:07.169 fused_ordering(605) 00:11:07.169 fused_ordering(606) 00:11:07.169 fused_ordering(607) 00:11:07.169 fused_ordering(608) 00:11:07.169 fused_ordering(609) 00:11:07.169 fused_ordering(610) 00:11:07.169 fused_ordering(611) 00:11:07.169 fused_ordering(612) 00:11:07.169 fused_ordering(613) 00:11:07.169 fused_ordering(614) 00:11:07.169 fused_ordering(615) 00:11:07.733 fused_ordering(616) 00:11:07.733 fused_ordering(617) 00:11:07.733 fused_ordering(618) 00:11:07.733 fused_ordering(619) 00:11:07.733 fused_ordering(620) 00:11:07.733 fused_ordering(621) 00:11:07.733 fused_ordering(622) 00:11:07.733 fused_ordering(623) 00:11:07.733 fused_ordering(624) 00:11:07.733 fused_ordering(625) 00:11:07.733 fused_ordering(626) 00:11:07.733 fused_ordering(627) 00:11:07.733 fused_ordering(628) 00:11:07.733 fused_ordering(629) 00:11:07.733 fused_ordering(630) 00:11:07.733 fused_ordering(631) 00:11:07.733 fused_ordering(632) 00:11:07.733 fused_ordering(633) 00:11:07.733 
fused_ordering(634) 00:11:07.733 fused_ordering(635) 00:11:07.733 fused_ordering(636) 00:11:07.733 fused_ordering(637) 00:11:07.733 fused_ordering(638) 00:11:07.733 fused_ordering(639) 00:11:07.733 fused_ordering(640) 00:11:07.733 fused_ordering(641) 00:11:07.733 fused_ordering(642) 00:11:07.733 fused_ordering(643) 00:11:07.733 fused_ordering(644) 00:11:07.733 fused_ordering(645) 00:11:07.733 fused_ordering(646) 00:11:07.733 fused_ordering(647) 00:11:07.733 fused_ordering(648) 00:11:07.733 fused_ordering(649) 00:11:07.733 fused_ordering(650) 00:11:07.733 fused_ordering(651) 00:11:07.733 fused_ordering(652) 00:11:07.733 fused_ordering(653) 00:11:07.733 fused_ordering(654) 00:11:07.733 fused_ordering(655) 00:11:07.733 fused_ordering(656) 00:11:07.733 fused_ordering(657) 00:11:07.733 fused_ordering(658) 00:11:07.733 fused_ordering(659) 00:11:07.733 fused_ordering(660) 00:11:07.733 fused_ordering(661) 00:11:07.733 fused_ordering(662) 00:11:07.733 fused_ordering(663) 00:11:07.733 fused_ordering(664) 00:11:07.733 fused_ordering(665) 00:11:07.733 fused_ordering(666) 00:11:07.733 fused_ordering(667) 00:11:07.733 fused_ordering(668) 00:11:07.733 fused_ordering(669) 00:11:07.733 fused_ordering(670) 00:11:07.733 fused_ordering(671) 00:11:07.733 fused_ordering(672) 00:11:07.733 fused_ordering(673) 00:11:07.733 fused_ordering(674) 00:11:07.733 fused_ordering(675) 00:11:07.733 fused_ordering(676) 00:11:07.733 fused_ordering(677) 00:11:07.733 fused_ordering(678) 00:11:07.733 fused_ordering(679) 00:11:07.733 fused_ordering(680) 00:11:07.733 fused_ordering(681) 00:11:07.733 fused_ordering(682) 00:11:07.733 fused_ordering(683) 00:11:07.733 fused_ordering(684) 00:11:07.733 fused_ordering(685) 00:11:07.733 fused_ordering(686) 00:11:07.733 fused_ordering(687) 00:11:07.733 fused_ordering(688) 00:11:07.733 fused_ordering(689) 00:11:07.733 fused_ordering(690) 00:11:07.733 fused_ordering(691) 00:11:07.733 fused_ordering(692) 00:11:07.733 fused_ordering(693) 00:11:07.733 fused_ordering(694) 00:11:07.733 fused_ordering(695) 00:11:07.733 fused_ordering(696) 00:11:07.733 fused_ordering(697) 00:11:07.733 fused_ordering(698) 00:11:07.733 fused_ordering(699) 00:11:07.733 fused_ordering(700) 00:11:07.733 fused_ordering(701) 00:11:07.733 fused_ordering(702) 00:11:07.733 fused_ordering(703) 00:11:07.733 fused_ordering(704) 00:11:07.733 fused_ordering(705) 00:11:07.733 fused_ordering(706) 00:11:07.733 fused_ordering(707) 00:11:07.733 fused_ordering(708) 00:11:07.733 fused_ordering(709) 00:11:07.733 fused_ordering(710) 00:11:07.733 fused_ordering(711) 00:11:07.733 fused_ordering(712) 00:11:07.733 fused_ordering(713) 00:11:07.733 fused_ordering(714) 00:11:07.733 fused_ordering(715) 00:11:07.733 fused_ordering(716) 00:11:07.733 fused_ordering(717) 00:11:07.733 fused_ordering(718) 00:11:07.733 fused_ordering(719) 00:11:07.733 fused_ordering(720) 00:11:07.733 fused_ordering(721) 00:11:07.733 fused_ordering(722) 00:11:07.733 fused_ordering(723) 00:11:07.733 fused_ordering(724) 00:11:07.733 fused_ordering(725) 00:11:07.733 fused_ordering(726) 00:11:07.733 fused_ordering(727) 00:11:07.733 fused_ordering(728) 00:11:07.733 fused_ordering(729) 00:11:07.733 fused_ordering(730) 00:11:07.733 fused_ordering(731) 00:11:07.733 fused_ordering(732) 00:11:07.733 fused_ordering(733) 00:11:07.733 fused_ordering(734) 00:11:07.733 fused_ordering(735) 00:11:07.733 fused_ordering(736) 00:11:07.733 fused_ordering(737) 00:11:07.733 fused_ordering(738) 00:11:07.733 fused_ordering(739) 00:11:07.733 fused_ordering(740) 00:11:07.733 fused_ordering(741) 
00:11:07.733 fused_ordering(742) 00:11:07.733 fused_ordering(743) 00:11:07.733 fused_ordering(744) 00:11:07.733 fused_ordering(745) 00:11:07.733 fused_ordering(746) 00:11:07.733 fused_ordering(747) 00:11:07.733 fused_ordering(748) 00:11:07.733 fused_ordering(749) 00:11:07.733 fused_ordering(750) 00:11:07.733 fused_ordering(751) 00:11:07.733 fused_ordering(752) 00:11:07.733 fused_ordering(753) 00:11:07.733 fused_ordering(754) 00:11:07.733 fused_ordering(755) 00:11:07.733 fused_ordering(756) 00:11:07.733 fused_ordering(757) 00:11:07.733 fused_ordering(758) 00:11:07.733 fused_ordering(759) 00:11:07.733 fused_ordering(760) 00:11:07.733 fused_ordering(761) 00:11:07.733 fused_ordering(762) 00:11:07.733 fused_ordering(763) 00:11:07.733 fused_ordering(764) 00:11:07.733 fused_ordering(765) 00:11:07.733 fused_ordering(766) 00:11:07.733 fused_ordering(767) 00:11:07.733 fused_ordering(768) 00:11:07.733 fused_ordering(769) 00:11:07.733 fused_ordering(770) 00:11:07.733 fused_ordering(771) 00:11:07.733 fused_ordering(772) 00:11:07.733 fused_ordering(773) 00:11:07.733 fused_ordering(774) 00:11:07.733 fused_ordering(775) 00:11:07.733 fused_ordering(776) 00:11:07.733 fused_ordering(777) 00:11:07.733 fused_ordering(778) 00:11:07.733 fused_ordering(779) 00:11:07.733 fused_ordering(780) 00:11:07.733 fused_ordering(781) 00:11:07.733 fused_ordering(782) 00:11:07.733 fused_ordering(783) 00:11:07.733 fused_ordering(784) 00:11:07.733 fused_ordering(785) 00:11:07.733 fused_ordering(786) 00:11:07.733 fused_ordering(787) 00:11:07.733 fused_ordering(788) 00:11:07.733 fused_ordering(789) 00:11:07.733 fused_ordering(790) 00:11:07.733 fused_ordering(791) 00:11:07.733 fused_ordering(792) 00:11:07.733 fused_ordering(793) 00:11:07.733 fused_ordering(794) 00:11:07.733 fused_ordering(795) 00:11:07.733 fused_ordering(796) 00:11:07.733 fused_ordering(797) 00:11:07.733 fused_ordering(798) 00:11:07.733 fused_ordering(799) 00:11:07.733 fused_ordering(800) 00:11:07.733 fused_ordering(801) 00:11:07.733 fused_ordering(802) 00:11:07.733 fused_ordering(803) 00:11:07.733 fused_ordering(804) 00:11:07.733 fused_ordering(805) 00:11:07.733 fused_ordering(806) 00:11:07.733 fused_ordering(807) 00:11:07.733 fused_ordering(808) 00:11:07.733 fused_ordering(809) 00:11:07.733 fused_ordering(810) 00:11:07.733 fused_ordering(811) 00:11:07.733 fused_ordering(812) 00:11:07.733 fused_ordering(813) 00:11:07.733 fused_ordering(814) 00:11:07.733 fused_ordering(815) 00:11:07.733 fused_ordering(816) 00:11:07.733 fused_ordering(817) 00:11:07.733 fused_ordering(818) 00:11:07.733 fused_ordering(819) 00:11:07.733 fused_ordering(820) 00:11:08.665 fused_ordering(821) 00:11:08.665 fused_ordering(822) 00:11:08.665 fused_ordering(823) 00:11:08.665 fused_ordering(824) 00:11:08.665 fused_ordering(825) 00:11:08.665 fused_ordering(826) 00:11:08.665 fused_ordering(827) 00:11:08.665 fused_ordering(828) 00:11:08.665 fused_ordering(829) 00:11:08.665 fused_ordering(830) 00:11:08.665 fused_ordering(831) 00:11:08.665 fused_ordering(832) 00:11:08.665 fused_ordering(833) 00:11:08.665 fused_ordering(834) 00:11:08.665 fused_ordering(835) 00:11:08.665 fused_ordering(836) 00:11:08.665 fused_ordering(837) 00:11:08.665 fused_ordering(838) 00:11:08.665 fused_ordering(839) 00:11:08.665 fused_ordering(840) 00:11:08.665 fused_ordering(841) 00:11:08.665 fused_ordering(842) 00:11:08.665 fused_ordering(843) 00:11:08.665 fused_ordering(844) 00:11:08.665 fused_ordering(845) 00:11:08.665 fused_ordering(846) 00:11:08.665 fused_ordering(847) 00:11:08.665 fused_ordering(848) 00:11:08.665 
fused_ordering(849) 00:11:08.665 fused_ordering(850) 00:11:08.665 fused_ordering(851) 00:11:08.665 fused_ordering(852) 00:11:08.665 fused_ordering(853) 00:11:08.665 fused_ordering(854) 00:11:08.665 fused_ordering(855) 00:11:08.665 fused_ordering(856) 00:11:08.665 fused_ordering(857) 00:11:08.665 fused_ordering(858) 00:11:08.665 fused_ordering(859) 00:11:08.665 fused_ordering(860) 00:11:08.665 fused_ordering(861) 00:11:08.665 fused_ordering(862) 00:11:08.665 fused_ordering(863) 00:11:08.665 fused_ordering(864) 00:11:08.665 fused_ordering(865) 00:11:08.665 fused_ordering(866) 00:11:08.665 fused_ordering(867) 00:11:08.665 fused_ordering(868) 00:11:08.665 fused_ordering(869) 00:11:08.665 fused_ordering(870) 00:11:08.665 fused_ordering(871) 00:11:08.665 fused_ordering(872) 00:11:08.665 fused_ordering(873) 00:11:08.665 fused_ordering(874) 00:11:08.665 fused_ordering(875) 00:11:08.665 fused_ordering(876) 00:11:08.665 fused_ordering(877) 00:11:08.665 fused_ordering(878) 00:11:08.665 fused_ordering(879) 00:11:08.665 fused_ordering(880) 00:11:08.665 fused_ordering(881) 00:11:08.665 fused_ordering(882) 00:11:08.665 fused_ordering(883) 00:11:08.665 fused_ordering(884) 00:11:08.665 fused_ordering(885) 00:11:08.666 fused_ordering(886) 00:11:08.666 fused_ordering(887) 00:11:08.666 fused_ordering(888) 00:11:08.666 fused_ordering(889) 00:11:08.666 fused_ordering(890) 00:11:08.666 fused_ordering(891) 00:11:08.666 fused_ordering(892) 00:11:08.666 fused_ordering(893) 00:11:08.666 fused_ordering(894) 00:11:08.666 fused_ordering(895) 00:11:08.666 fused_ordering(896) 00:11:08.666 fused_ordering(897) 00:11:08.666 fused_ordering(898) 00:11:08.666 fused_ordering(899) 00:11:08.666 fused_ordering(900) 00:11:08.666 fused_ordering(901) 00:11:08.666 fused_ordering(902) 00:11:08.666 fused_ordering(903) 00:11:08.666 fused_ordering(904) 00:11:08.666 fused_ordering(905) 00:11:08.666 fused_ordering(906) 00:11:08.666 fused_ordering(907) 00:11:08.666 fused_ordering(908) 00:11:08.666 fused_ordering(909) 00:11:08.666 fused_ordering(910) 00:11:08.666 fused_ordering(911) 00:11:08.666 fused_ordering(912) 00:11:08.666 fused_ordering(913) 00:11:08.666 fused_ordering(914) 00:11:08.666 fused_ordering(915) 00:11:08.666 fused_ordering(916) 00:11:08.666 fused_ordering(917) 00:11:08.666 fused_ordering(918) 00:11:08.666 fused_ordering(919) 00:11:08.666 fused_ordering(920) 00:11:08.666 fused_ordering(921) 00:11:08.666 fused_ordering(922) 00:11:08.666 fused_ordering(923) 00:11:08.666 fused_ordering(924) 00:11:08.666 fused_ordering(925) 00:11:08.666 fused_ordering(926) 00:11:08.666 fused_ordering(927) 00:11:08.666 fused_ordering(928) 00:11:08.666 fused_ordering(929) 00:11:08.666 fused_ordering(930) 00:11:08.666 fused_ordering(931) 00:11:08.666 fused_ordering(932) 00:11:08.666 fused_ordering(933) 00:11:08.666 fused_ordering(934) 00:11:08.666 fused_ordering(935) 00:11:08.666 fused_ordering(936) 00:11:08.666 fused_ordering(937) 00:11:08.666 fused_ordering(938) 00:11:08.666 fused_ordering(939) 00:11:08.666 fused_ordering(940) 00:11:08.666 fused_ordering(941) 00:11:08.666 fused_ordering(942) 00:11:08.666 fused_ordering(943) 00:11:08.666 fused_ordering(944) 00:11:08.666 fused_ordering(945) 00:11:08.666 fused_ordering(946) 00:11:08.666 fused_ordering(947) 00:11:08.666 fused_ordering(948) 00:11:08.666 fused_ordering(949) 00:11:08.666 fused_ordering(950) 00:11:08.666 fused_ordering(951) 00:11:08.666 fused_ordering(952) 00:11:08.666 fused_ordering(953) 00:11:08.666 fused_ordering(954) 00:11:08.666 fused_ordering(955) 00:11:08.666 fused_ordering(956) 
00:11:08.666 fused_ordering(957) 00:11:08.666 fused_ordering(958) 00:11:08.666 fused_ordering(959) 00:11:08.666 fused_ordering(960) 00:11:08.666 fused_ordering(961) 00:11:08.666 fused_ordering(962) 00:11:08.666 fused_ordering(963) 00:11:08.666 fused_ordering(964) 00:11:08.666 fused_ordering(965) 00:11:08.666 fused_ordering(966) 00:11:08.666 fused_ordering(967) 00:11:08.666 fused_ordering(968) 00:11:08.666 fused_ordering(969) 00:11:08.666 fused_ordering(970) 00:11:08.666 fused_ordering(971) 00:11:08.666 fused_ordering(972) 00:11:08.666 fused_ordering(973) 00:11:08.666 fused_ordering(974) 00:11:08.666 fused_ordering(975) 00:11:08.666 fused_ordering(976) 00:11:08.666 fused_ordering(977) 00:11:08.666 fused_ordering(978) 00:11:08.666 fused_ordering(979) 00:11:08.666 fused_ordering(980) 00:11:08.666 fused_ordering(981) 00:11:08.666 fused_ordering(982) 00:11:08.666 fused_ordering(983) 00:11:08.666 fused_ordering(984) 00:11:08.666 fused_ordering(985) 00:11:08.666 fused_ordering(986) 00:11:08.666 fused_ordering(987) 00:11:08.666 fused_ordering(988) 00:11:08.666 fused_ordering(989) 00:11:08.666 fused_ordering(990) 00:11:08.666 fused_ordering(991) 00:11:08.666 fused_ordering(992) 00:11:08.666 fused_ordering(993) 00:11:08.666 fused_ordering(994) 00:11:08.666 fused_ordering(995) 00:11:08.666 fused_ordering(996) 00:11:08.666 fused_ordering(997) 00:11:08.666 fused_ordering(998) 00:11:08.666 fused_ordering(999) 00:11:08.666 fused_ordering(1000) 00:11:08.666 fused_ordering(1001) 00:11:08.666 fused_ordering(1002) 00:11:08.666 fused_ordering(1003) 00:11:08.666 fused_ordering(1004) 00:11:08.666 fused_ordering(1005) 00:11:08.666 fused_ordering(1006) 00:11:08.666 fused_ordering(1007) 00:11:08.666 fused_ordering(1008) 00:11:08.666 fused_ordering(1009) 00:11:08.666 fused_ordering(1010) 00:11:08.666 fused_ordering(1011) 00:11:08.666 fused_ordering(1012) 00:11:08.666 fused_ordering(1013) 00:11:08.666 fused_ordering(1014) 00:11:08.666 fused_ordering(1015) 00:11:08.666 fused_ordering(1016) 00:11:08.666 fused_ordering(1017) 00:11:08.666 fused_ordering(1018) 00:11:08.666 fused_ordering(1019) 00:11:08.666 fused_ordering(1020) 00:11:08.666 fused_ordering(1021) 00:11:08.666 fused_ordering(1022) 00:11:08.666 fused_ordering(1023) 00:11:08.666 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:08.666 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:08.666 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:08.666 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:08.666 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:08.666 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:08.666 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:08.666 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:08.666 rmmod nvme_tcp 00:11:08.666 rmmod nvme_fabrics 00:11:08.666 rmmod nvme_keyring 00:11:08.666 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:08.666 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:08.666 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 
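For reference, the flow behind the fused_ordering output above reduces to the sketch below. The RPC verbs and the nvmftestinit/nvmfappstart/nvmftestfini helpers all appear verbatim in this log's traces; the example binary's path, its -r invocation string, and the bdev size are assumptions rather than values taken from this run.

  nvmftestinit
  nvmfappstart                                # starts nvmf_tgt (pid 3115926 in this run)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 1024 512 -b Malloc1   # assumed size, matching "Namespace ID: 1 size: 1GB" above
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator-side app (pid 3116028 per the EAL file-prefix above) that prints
  # one fused_ordering(i) line per iteration (binary path assumed):
  ./fused_ordering -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1"
  nvmftestfini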
-- # return 0 00:11:08.666 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3115926 ']' 00:11:08.666 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3115926 00:11:08.666 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 3115926 ']' 00:11:08.666 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 3115926 00:11:08.666 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:11:08.666 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:08.666 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3115926 00:11:08.925 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:08.925 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:08.925 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3115926' 00:11:08.925 killing process with pid 3115926 00:11:08.925 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 3115926 00:11:08.925 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 3115926 00:11:09.182 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:09.182 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:09.182 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:09.182 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:09.182 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:09.182 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.182 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.182 18:53:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.087 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:11.087 00:11:11.087 real 0m8.817s 00:11:11.087 user 0m5.905s 00:11:11.087 sys 0m3.784s 00:11:11.087 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:11.087 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:11.088 ************************************ 00:11:11.088 END TEST nvmf_fused_ordering 00:11:11.088 ************************************ 00:11:11.088 18:53:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:11.088 18:53:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:11.088 18:53:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:11.088 18:53:48 
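The process cleanup just traced follows autotest_common.sh's killprocess pattern. In outline (a simplified reconstruction, not the verbatim helper):

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1            # the '[' -z 3115926 ']' check above
      kill -0 "$pid"                       # fails if the process already exited
      if [[ $(uname) == Linux ]]; then
          process_name=$(ps --no-headers -o comm= "$pid")   # reactor_1 here
      fi
      # when process_name is sudo, the real helper targets sudo's child instead
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }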
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:11.088 ************************************ 00:11:11.088 START TEST nvmf_ns_masking 00:11:11.088 ************************************ 00:11:11.088 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:11.346 * Looking for test storage... 00:11:11.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain directories repeated several more times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=[as above, with a further copy of the toolchain directories prepended, starting at /opt/go/1.21.1/bin] 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=[likewise, starting at /opt/protoc/21.7/bin] 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo [the exported PATH value; the duplicated toolchain segments are elided here for readability] 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:11.346 18:53:48 
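The NVMF_APP+=(...) appends traced here build up the target's command line; once nvmf/common.sh later prepends the network-namespace wrapper (the NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") step further down), the launch line actually used in this run expands to:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF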
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=e80c2f24-d275-4ce7-b72f-c264d460fc40 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=a45de1f0-6b4d-4182-944f-873fdbe21348 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=dbb17723-5ca7-4de1-ba6f-d64f9fab6b64 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:11.346 18:53:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:13.250 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:13.250 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:13.250 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
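Collected in one place, the identifiers ns_masking.sh just generated or fixed for this run (all values from the trace above) are:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostsock=/var/tmp/host.sock
  loops=5
  ns1uuid=e80c2f24-d275-4ce7-b72f-c264d460fc40      # uuidgen output this run
  ns2uuid=a45de1f0-6b4d-4182-944f-873fdbe21348      # uuidgen output this run
  SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
  HOSTNQN1=nqn.2016-06.io.spdk:host1
  HOSTNQN2=nqn.2016-06.io.spdk:host2
  HOSTID=dbb17723-5ca7-4de1-ba6f-d64f9fab6b64       # uuidgen output; passed to nvme connect -I below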
nvmf/common.sh@291 -- # local -a pci_devs 00:11:13.250 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:13.250 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:13.250 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:13.250 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:13.250 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:13.250 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:13.250 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:13.250 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:13.250 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:13.250 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:13.250 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:13.251 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:13.251 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:13.251 Found net devices under 0000:09:00.0: cvl_0_0 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:13.251 Found net devices under 0000:09:00.1: cvl_0_1 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:13.251 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:13.548 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:13.548 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:13.548 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:13.549 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:13.549 18:53:50 
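Condensed, the device discovery above found the two ports of an Intel E810 NIC (vendor 0x8086, device 0x159b, ice driver) and assigned cvl_0_0 to the target role and cvl_0_1 to the initiator role. The same facts could be confirmed by hand with standard tools (illustrative commands, not part of this run):

  lspci -d 8086:159b                          # lists 0000:09:00.0 and 0000:09:00.1
  ls /sys/bus/pci/devices/0000:09:00.0/net    # cvl_0_0, the target interface
  ls /sys/bus/pci/devices/0000:09:00.1/net    # cvl_0_1, the initiator interface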
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:13.549 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:13.549 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:13.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:13.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:11:13.549 00:11:13.549 --- 10.0.0.2 ping statistics --- 00:11:13.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.549 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:11:13.549 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:13.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:13.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:11:13.549 00:11:13.549 --- 10.0.0.1 ping statistics --- 00:11:13.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.549 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:11:13.549 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:13.549 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:13.549 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:13.549 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:13.549 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:13.549 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:13.549 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:13.549 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:13.549 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:13.549 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:13.549 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:13.549 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:13.549 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:13.549 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3118363 00:11:13.549 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:13.549 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3118363 00:11:13.549 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3118363 ']' 00:11:13.549 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.549 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:13.549 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
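Taken together, the nvmf_tcp_init steps traced above amount to the following wiring, which the two pings then verify in both directions (consolidated from this run's trace):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # initiator -> target: 0% loss above
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator: 0% loss above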
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.549 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:13.549 18:53:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:13.549 [2024-07-24 18:53:51.017583] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:11:13.549 [2024-07-24 18:53:51.017671] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.549 EAL: No free 2048 kB hugepages reported on node 1 00:11:13.549 [2024-07-24 18:53:51.081711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.808 [2024-07-24 18:53:51.190641] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:13.808 [2024-07-24 18:53:51.190710] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:13.808 [2024-07-24 18:53:51.190737] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:13.808 [2024-07-24 18:53:51.190748] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:13.808 [2024-07-24 18:53:51.190758] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:13.808 [2024-07-24 18:53:51.190793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.808 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:13.808 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:11:13.808 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:13.808 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:13.808 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:13.808 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:13.808 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:14.066 [2024-07-24 18:53:51.560112] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:14.066 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:14.066 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:14.066 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:14.324 Malloc1 00:11:14.324 18:53:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:14.582 Malloc2 00:11:14.864 18:53:52 
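With the transport created and two 64 MB malloc bdevs in hand, provisioning continues in the trace below (subsystem, namespace 1, listener). As plain commands, the target-side sequence for this test is:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The initiator then connects with an explicit host NQN and host identifier (nvme connect ... -q nqn.2016-06.io.spdk:host1 -I dbb17723-5ca7-4de1-ba6f-d64f9fab6b64), and namespace visibility is probed by the ns_is_visible helper whose trace fills the rest of this section. A simplified reconstruction of that helper (the real definition lives in test/nvmf/target/ns_masking.sh):

  ns_is_visible() {
      # list active namespaces on the connected controller and look for the nsid
      nvme list-ns "/dev/$ctrl_id" | grep "$1"
      # a namespace masked from this host reports an all-zero NGUID
      nguid=$(nvme id-ns "/dev/$ctrl_id" -n "$1" -o json | jq -r .nguid)
      [[ $nguid != "00000000000000000000000000000000" ]]
  }

Near the end of this excerpt the test removes namespace 1 and re-adds Malloc1 with --no-auto-visible, the flag that keeps a namespace hidden from every host until it is explicitly exposed to one; the visibility checks are what that masking behavior is measured against.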
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:14.864 18:53:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:15.122 18:53:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.379 [2024-07-24 18:53:52.903862] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.379 18:53:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:15.379 18:53:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dbb17723-5ca7-4de1-ba6f-d64f9fab6b64 -a 10.0.0.2 -s 4420 -i 4 00:11:15.636 18:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:15.636 18:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:15.636 18:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:15.636 18:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:15.636 18:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:17.531 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:17.531 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:17.531 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:17.531 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:17.531 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:17.531 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:17.531 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:17.531 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:17.789 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:17.789 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:17.789 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:17.789 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:17.789 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:17.789 [ 0]:0x1 00:11:17.789 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:11:17.789 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:17.789 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=49557e1b05834b609b4f1d4e1cec5697 00:11:17.789 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 49557e1b05834b609b4f1d4e1cec5697 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:17.789 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:18.046 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:18.046 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:18.046 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:18.046 [ 0]:0x1 00:11:18.046 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:18.046 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:18.046 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=49557e1b05834b609b4f1d4e1cec5697 00:11:18.046 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 49557e1b05834b609b4f1d4e1cec5697 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:18.046 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:18.046 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:18.046 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:18.046 [ 1]:0x2 00:11:18.046 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:18.046 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:18.046 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=866f2d31f023481ea4b1701569283b89 00:11:18.046 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 866f2d31f023481ea4b1701569283b89 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:18.046 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:18.046 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:18.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.303 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:18.560 18:53:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:18.818 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:18.818 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dbb17723-5ca7-4de1-ba6f-d64f9fab6b64 -a 10.0.0.2 -s 4420 -i 4 00:11:18.818 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:18.818 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:18.818 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:18.818 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:11:18.818 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:11:18.818 18:53:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:21.343 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:21.343 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:21.343 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:21.343 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:21.343 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:21.343 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:21.343 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:21.343 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:21.343 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:21.343 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:21.343 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:21.343 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:21.343 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:21.344 [ 0]:0x2 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=866f2d31f023481ea4b1701569283b89 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 866f2d31f023481ea4b1701569283b89 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:21.344 [ 0]:0x1 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=49557e1b05834b609b4f1d4e1cec5697 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 49557e1b05834b609b4f1d4e1cec5697 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:21.344 [ 1]:0x2 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 
-o json 00:11:21.344 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:21.602 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=866f2d31f023481ea4b1701569283b89 00:11:21.602 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 866f2d31f023481ea4b1701569283b89 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:21.602 18:53:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:21.602 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:21.602 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:21.602 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:11:21.602 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:11:21.602 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:21.602 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:11:21.602 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:21.602 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:11:21.602 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:21.602 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:21.860 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:21.860 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:21.860 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:21.860 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:21.860 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:21.860 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:21.860 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:21.860 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:21.860 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:21.860 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:21.861 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:21.861 [ 0]:0x2 00:11:21.861 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:21.861 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x2 -o json 00:11:21.861 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=866f2d31f023481ea4b1701569283b89 00:11:21.861 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 866f2d31f023481ea4b1701569283b89 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:21.861 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:21.861 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:21.861 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.861 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:22.118 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:22.118 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dbb17723-5ca7-4de1-ba6f-d64f9fab6b64 -a 10.0.0.2 -s 4420 -i 4 00:11:22.375 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:22.375 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:22.375 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:22.375 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:22.375 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:22.375 18:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:24.273 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:24.273 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:24.273 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:24.273 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:24.273 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:24.273 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:24.273 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:24.273 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:24.273 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:24.273 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:24.273 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:24.273 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 
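At this point the test has exercised the core of namespace masking end to end: a namespace added with --no-auto-visible is invisible to every host until nvmf_ns_add_host exposes it to a specific hostnqn, and nvmf_ns_remove_host hides it again, while the auto-visible Malloc2 namespace stays visible throughout. A condensed sketch of that RPC sequence, assuming a target already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 and using rpc.py as shorthand for the full scripts/rpc.py path seen in the log:

    # namespace is hidden from all hosts until one is allowed explicitly
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    # expose nsid 1 to host1 only, then hide it again
    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

After each step the connected host re-checks what it can see with nvme list-ns, as the surrounding log records show.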
00:11:24.273 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:24.273 [ 0]:0x1 00:11:24.273 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:24.273 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:24.530 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=49557e1b05834b609b4f1d4e1cec5697 00:11:24.530 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 49557e1b05834b609b4f1d4e1cec5697 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:24.530 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:24.530 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:24.530 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:24.530 [ 1]:0x2 00:11:24.530 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:24.530 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:24.530 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=866f2d31f023481ea4b1701569283b89 00:11:24.530 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 866f2d31f023481ea4b1701569283b89 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:24.530 18:54:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:24.788 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:24.788 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:24.788 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:11:24.788 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:11:24.788 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:24.788 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:11:24.788 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:24.788 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:11:24.788 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:24.788 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:24.788 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:24.788 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:24.788 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:24.788 18:54:02 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:24.788 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:24.788 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:24.788 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:24.788 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:24.788 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:11:24.788 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:24.789 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:24.789 [ 0]:0x2 00:11:24.789 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:24.789 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:24.789 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=866f2d31f023481ea4b1701569283b89 00:11:24.789 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 866f2d31f023481ea4b1701569283b89 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:24.789 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:24.789 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:24.789 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:24.789 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:24.789 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:24.789 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:24.789 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:24.789 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:24.789 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:24.789 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:24.789 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:24.789 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:25.047 [2024-07-24 18:54:02.529589] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:25.047 request: 00:11:25.047 { 00:11:25.047 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:25.047 "nsid": 2, 00:11:25.047 "host": "nqn.2016-06.io.spdk:host1", 00:11:25.047 "method": "nvmf_ns_remove_host", 00:11:25.047 "req_id": 1 00:11:25.047 } 00:11:25.047 Got JSON-RPC error response 00:11:25.047 response: 00:11:25.047 { 00:11:25.047 "code": -32602, 00:11:25.047 "message": "Invalid parameters" 00:11:25.047 } 00:11:25.047 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:25.047 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:25.047 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:25.047 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:25.047 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:25.047 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:25.047 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:11:25.047 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:11:25.047 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:25.047 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:11:25.047 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:25.047 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:11:25.047 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:25.047 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:25.047 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:25.047 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:25.047 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:25.047 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:25.047 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:25.047 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:25.047 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:25.047 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:25.047 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:25.047 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:25.047 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:25.047 [ 0]:0x2 00:11:25.047 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:25.047 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:25.047 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=866f2d31f023481ea4b1701569283b89 00:11:25.047 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 866f2d31f023481ea4b1701569283b89 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:25.047 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:25.047 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:25.304 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.304 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3120094 00:11:25.304 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:25.304 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:25.304 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3120094 /var/tmp/host.sock 00:11:25.304 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3120094 ']' 00:11:25.304 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:11:25.304 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:25.304 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:25.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:25.304 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:25.304 18:54:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:25.304 [2024-07-24 18:54:02.814900] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
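Two details are worth pulling out of the exchange above. First, the visibility probe: ns_is_visible treats a namespace as visible only if nvme list-ns reports the nsid and nvme id-ns returns a non-zero NGUID; for a masked namespace the id-ns data comes back zeroed, so the comparison fails. A sketch of that check, assuming the controller enumerated as /dev/nvme0:

    nvme list-ns /dev/nvme0 | grep 0x1                       # is the nsid listed at all?
    nguid=$(nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid)
    [[ $nguid != 00000000000000000000000000000000 ]]         # all-zero NGUID means masked

Second, the error path: calling nvmf_ns_remove_host against nsid 2, which was added without --no-auto-visible and therefore carries no per-host visibility list, fails with JSON-RPC error -32602 (Invalid parameters), which is exactly the failure the NOT wrapper asserts.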
00:11:25.304 [2024-07-24 18:54:02.814996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3120094 ] 00:11:25.304 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.304 [2024-07-24 18:54:02.878336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.561 [2024-07-24 18:54:02.999823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.492 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:26.492 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:11:26.492 18:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:26.492 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:26.749 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid e80c2f24-d275-4ce7-b72f-c264d460fc40 00:11:26.749 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:26.749 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g E80C2F24D2754CE7B72FC264D460FC40 -i 00:11:27.007 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid a45de1f0-6b4d-4182-944f-873fdbe21348 00:11:27.007 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:27.007 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g A45DE1F06B4D4182944F873FDBE21348 -i 00:11:27.264 18:54:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:27.831 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:27.831 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:27.831 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:28.396 nvme0n1 00:11:28.396 18:54:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:28.396 18:54:05 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:28.961 nvme1n2 00:11:28.961 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:11:28.961 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:11:28.961 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:28.961 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:11:28.961 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:11:28.961 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:11:28.961 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:11:28.961 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:11:28.961 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:11:29.232 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ e80c2f24-d275-4ce7-b72f-c264d460fc40 == \e\8\0\c\2\f\2\4\-\d\2\7\5\-\4\c\e\7\-\b\7\2\f\-\c\2\6\4\d\4\6\0\f\c\4\0 ]] 00:11:29.232 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:11:29.232 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:11:29.232 18:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:11:29.496 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ a45de1f0-6b4d-4182-944f-873fdbe21348 == \a\4\5\d\e\1\f\0\-\6\b\4\d\-\4\1\8\2\-\9\4\4\f\-\8\7\3\f\d\b\e\2\1\3\4\8 ]] 00:11:29.496 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3120094 00:11:29.496 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3120094 ']' 00:11:29.496 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3120094 00:11:29.496 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:11:29.496 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:29.496 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3120094 00:11:29.496 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:29.496 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:29.496 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 3120094' 00:11:29.496 killing process with pid 3120094 00:11:29.496 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3120094 00:11:29.496 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3120094 00:11:30.063 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:30.320 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:11:30.320 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:11:30.320 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:30.320 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:30.320 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:30.320 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:30.320 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:30.320 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:30.320 rmmod nvme_tcp 00:11:30.320 rmmod nvme_fabrics 00:11:30.320 rmmod nvme_keyring 00:11:30.320 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:30.320 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:30.320 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:30.320 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3118363 ']' 00:11:30.320 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3118363 00:11:30.320 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3118363 ']' 00:11:30.320 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3118363 00:11:30.320 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:11:30.320 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:30.321 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3118363 00:11:30.321 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:30.321 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:30.321 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3118363' 00:11:30.321 killing process with pid 3118363 00:11:30.321 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3118363 00:11:30.321 18:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3118363 00:11:30.886 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:30.886 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:30.886 
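The second half of the test drove everything from a separate SPDK host application (spdk_tgt -r /var/tmp/host.sock) rather than the kernel initiator: the namespaces were re-created with explicit NGUIDs (uuid2nguid simply strips the dashes from a UUID, as the tr -d - in the log shows), each hostnqn was allowed one namespace, and the host app verified the result by attaching a controller per hostnqn and reading the bdev UUIDs back. A sketch of that host-side check, assuming the same target on 10.0.0.2:4420:

    # attach once per hostnqn; each should surface only its own namespace
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
    rpc.py -s /var/tmp/host.sock bdev_get_bdevs | jq -r '.[].name'         # nvme0n1 expected
    rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'

The UUID read back from nvme0n1 must match the one fed to uuid2nguid, which is the [[ ... == ... ]] comparison visible in the log.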
18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:30.886 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:30.886 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:30.886 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.886 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:30.886 18:54:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:32.789 00:11:32.789 real 0m21.577s 00:11:32.789 user 0m28.700s 00:11:32.789 sys 0m4.250s 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:32.789 ************************************ 00:11:32.789 END TEST nvmf_ns_masking 00:11:32.789 ************************************ 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:32.789 ************************************ 00:11:32.789 START TEST nvmf_nvme_cli 00:11:32.789 ************************************ 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:32.789 * Looking for test storage... 
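Between tests the trap handler runs nvmftestfini, whose teardown is visible above: unload the kernel NVMe/TCP stack, kill the target, and flush the test address off the NIC. Roughly, with $nvmfpid standing in for the target pid (3118363 in this run):

    modprobe -v -r nvme-tcp       # cascades to the nvme_tcp/nvme_fabrics/nvme_keyring rmmods
    kill "$nvmfpid"               # killprocess 3118363, the nvmf target
    ip -4 addr flush cvl_0_1      # drop the 10.0.0.x test address from the interface

With a clean slate, the suite moves on to nvmf_nvme_cli, which starts by re-running nvmftestinit and re-probing the e810 NICs, as the records below show.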
00:11:32.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.789 18:54:10 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:32.789 18:54:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:35.321 18:54:12 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:35.321 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:35.321 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:35.321 18:54:12 
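The device scan above works from a fixed table of PCI IDs: gather_supported_nvmf_pci_devs fills the e810, x722, and mlx arrays with Intel E810 (0x1592/0x159b), X722 (0x37d2), and several ConnectX device IDs, then keeps only the e810 family (the [[ e810 == e810 ]] branch), which on this host matches the two ports of one E810 NIC at 0000:09:00.0 and 0000:09:00.1. A minimal sketch of the same lookup done by hand, assuming the PCI addresses shown in the trace:

    # list E810 ports by PCI vendor:device ID, then resolve each port's netdev name
    lspci -d 8086:159b
    ls /sys/bus/pci/devices/0000:09:00.0/net
    ls /sys/bus/pci/devices/0000:09:00.1/net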
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:35.321 Found net devices under 0000:09:00.0: cvl_0_0 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:35.321 Found net devices under 0000:09:00.1: cvl_0_1 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:35.321 18:54:12 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:35.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:35.321 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:11:35.321 00:11:35.321 --- 10.0.0.2 ping statistics --- 00:11:35.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.321 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:35.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:35.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:11:35.321 00:11:35.321 --- 10.0.0.1 ping statistics --- 00:11:35.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.321 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:35.321 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:11:35.322 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:35.322 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:35.322 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:35.322 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:35.322 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:35.322 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:35.322 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:35.322 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:35.322 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:35.322 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:35.322 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:35.322 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3123104 00:11:35.322 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:35.322 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3123104 00:11:35.322 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 3123104 ']' 00:11:35.322 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.322 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:35.322 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.322 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:35.322 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:35.322 [2024-07-24 18:54:12.638438] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
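nvmf_tcp_init has just carved the two E810 ports into a self-contained loopback rig: cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) as the target interface at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and the two pings verify the path in both directions. A condensed sketch of the same setup, using the interface names and addresses from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port on the initiator side
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns

Because the target interface lives inside the namespace, every target-side command that follows is wrapped in ip netns exec cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD prefix assembled above).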
00:11:35.322 [2024-07-24 18:54:12.638516] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.322 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.322 [2024-07-24 18:54:12.702142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:35.322 [2024-07-24 18:54:12.813600] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:35.322 [2024-07-24 18:54:12.813652] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.322 [2024-07-24 18:54:12.813665] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.322 [2024-07-24 18:54:12.813677] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.322 [2024-07-24 18:54:12.813687] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:35.322 [2024-07-24 18:54:12.813737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.322 [2024-07-24 18:54:12.813794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.322 [2024-07-24 18:54:12.813859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.322 [2024-07-24 18:54:12.813862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.580 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:35.580 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:11:35.580 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:35.580 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:35.580 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:35.580 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.580 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:35.580 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.580 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:35.580 [2024-07-24 18:54:12.970687] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:35.580 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.580 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:35.580 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.580 18:54:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:35.580 Malloc0 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:35.580 18:54:13 
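With the target running inside the namespace (nvmfpid=3123104, four reactor cores), nvme_cli.sh provisions it over the /var/tmp/spdk.sock RPC socket: a TCP transport with the options recorded above (-o, plus -u 8192 for the I/O unit size), then two 64 MiB malloc bdevs with 512-byte blocks. The same calls issued directly, with the rpc.py path shortened to the repo-relative scripts/rpc.py:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MiB, 512 B blocks
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1

The subsystem wiring that follows attaches both bdevs as namespaces of nqn.2016-06.io.spdk:cnode1 and opens TCP listeners on 10.0.0.2:4420, which is exactly what the discovery log below reports.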
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:35.580 Malloc1 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:35.580 [2024-07-24 18:54:13.057437] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:11:35.580 00:11:35.580 Discovery Log Number of Records 2, Generation counter 2 00:11:35.580 =====Discovery Log Entry 0====== 00:11:35.580 trtype: tcp 00:11:35.580 adrfam: ipv4 00:11:35.580 subtype: current discovery subsystem 00:11:35.580 treq: not required 
00:11:35.580 portid: 0 00:11:35.580 trsvcid: 4420 00:11:35.580 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:35.580 traddr: 10.0.0.2 00:11:35.580 eflags: explicit discovery connections, duplicate discovery information 00:11:35.580 sectype: none 00:11:35.580 =====Discovery Log Entry 1====== 00:11:35.580 trtype: tcp 00:11:35.580 adrfam: ipv4 00:11:35.580 subtype: nvme subsystem 00:11:35.580 treq: not required 00:11:35.580 portid: 0 00:11:35.580 trsvcid: 4420 00:11:35.580 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:35.580 traddr: 10.0.0.2 00:11:35.580 eflags: none 00:11:35.580 sectype: none 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:35.580 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:36.511 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:36.511 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:11:36.511 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:36.511 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:36.511 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:36.511 18:54:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:11:38.405 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:38.405 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:38.405 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:38.405 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:38.405 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:38.405 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:11:38.405 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
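Record 0 is the well-known discovery subsystem and record 1 is the cnode1 subsystem configured above, both served on 10.0.0.2:4420. The test then snapshots the /dev/nvme* device list, connects with the kernel initiator, and expects both namespaces (Malloc0 and Malloc1) to surface with the serial SPDKISFASTANDAWESOME. The initiator side, condensed from the trace:

    nvme discover -t tcp -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
        --hostid=29f67375-a902-e411-ace9-001e67bc3c9a
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
        --hostid=29f67375-a902-e411-ace9-001e67bc3c9a
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 2

The waitforserial helper polls the lsblk count for up to ~15 iterations (the sleep 2 loop above) before declaring the connect successful.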
target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:38.405 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:38.405 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:38.405 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:38.405 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:38.405 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:38.405 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:38.405 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:38.405 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:38.405 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:38.405 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:38.405 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:38.405 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:38.405 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:38.405 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:38.405 /dev/nvme0n1 ]] 00:11:38.405 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:38.405 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:38.405 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:38.405 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:38.405 18:54:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:38.663 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:38.663 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:38.663 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:38.663 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:38.663 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:38.663 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:38.663 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:38.663 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:38.663 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:38.663 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:38.663 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:38.663 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:38.921 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:38.921 rmmod nvme_tcp 00:11:38.921 rmmod nvme_fabrics 00:11:38.921 rmmod nvme_keyring 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3123104 ']' 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3123104 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 3123104 ']' 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 3123104 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
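Teardown mirrors the setup: nvme disconnect detaches both namespaces, waitforserial_disconnect polls lsblk until the serial is gone, the subsystem is deleted over RPC, and nvmftestfini unloads the initiator modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines below) before killing the target. The equivalent manual sequence, with the pid taken from this run:

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    modprobe -v -r nvme-tcp        # also drops the now-unused nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 3123104                   # target pid (killprocess in the trace)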
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3123104 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3123104' 00:11:38.921 killing process with pid 3123104 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 3123104 00:11:38.921 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 3123104 00:11:39.488 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:39.488 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:39.488 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:39.488 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:39.488 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:39.488 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.488 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:39.488 18:54:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:41.390 00:11:41.390 real 0m8.548s 00:11:41.390 user 0m16.177s 00:11:41.390 sys 0m2.285s 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:41.390 ************************************ 00:11:41.390 END TEST nvmf_nvme_cli 00:11:41.390 ************************************ 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:41.390 ************************************ 00:11:41.390 START TEST nvmf_vfio_user 00:11:41.390 ************************************ 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:41.390 * Looking for test storage... 
00:11:41.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:11:41.390 18:54:18 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3124027 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3124027' 00:11:41.390 Process pid: 3124027 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3124027 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 3124027 ']' 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:41.390 18:54:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:41.649 [2024-07-24 18:54:19.012271] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:11:41.649 [2024-07-24 18:54:19.012354] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.649 EAL: No free 2048 kB hugepages reported on node 1 00:11:41.649 [2024-07-24 18:54:19.068626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:41.649 [2024-07-24 18:54:19.176119] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:41.649 [2024-07-24 18:54:19.176197] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:41.649 [2024-07-24 18:54:19.176236] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:41.649 [2024-07-24 18:54:19.176250] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:41.649 [2024-07-24 18:54:19.176261] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:41.649 [2024-07-24 18:54:19.176317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.649 [2024-07-24 18:54:19.176382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:41.649 [2024-07-24 18:54:19.176489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.649 [2024-07-24 18:54:19.176492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.906 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:41.906 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:11:41.906 18:54:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:42.842 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:11:43.099 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:43.099 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:43.099 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:43.099 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:43.099 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:43.357 Malloc1 00:11:43.357 18:54:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:43.614 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:43.870 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:44.127 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:44.127 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:44.127 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:44.385 Malloc2 00:11:44.385 18:54:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
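Two details differ from the TCP run above. First, the target needs no network namespace here, and the core selection uses the explicit list form -m '[0,1,2,3]' instead of the mask -m 0xF used earlier; the two spellings select the same four cores. Second, the transport endpoint is a directory rather than an IP:port: /var/run/vfio-user is wiped with rm -rf so no stale controller sockets survive, each controller gets its own socket directory beneath it, and the listener's -a argument points at that directory while -s is a placeholder 0. After creating the VFIOUSER transport once, the loop provisions each controller; condensed for the first one (the second pass repeats it with Malloc2, cnode2, and vfio-user2/2):

    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0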
00:11:44.642 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:44.899 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:45.157 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:11:45.157 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:11:45.157 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:45.157 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:45.157 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:11:45.157 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:45.157 [2024-07-24 18:54:22.676382] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:11:45.157 [2024-07-24 18:54:22.676424] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3124454 ] 00:11:45.157 EAL: No free 2048 kB hugepages reported on node 1 00:11:45.157 [2024-07-24 18:54:22.709513] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:11:45.157 [2024-07-24 18:54:22.719668] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:45.157 [2024-07-24 18:54:22.719697] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fec078cb000 00:11:45.157 [2024-07-24 18:54:22.720666] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:45.157 [2024-07-24 18:54:22.721665] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:45.157 [2024-07-24 18:54:22.722672] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:45.157 [2024-07-24 18:54:22.723682] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:45.157 [2024-07-24 18:54:22.724682] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:45.157 [2024-07-24 18:54:22.725686] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:45.157 [2024-07-24 18:54:22.726694] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:45.157 [2024-07-24 18:54:22.727694] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:45.157 [2024-07-24 18:54:22.728705] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:45.157 [2024-07-24 18:54:22.728723] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fec078c0000 00:11:45.157 [2024-07-24 18:54:22.729838] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:45.157 [2024-07-24 18:54:22.744726] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:11:45.157 [2024-07-24 18:54:22.744762] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:11:45.157 [2024-07-24 18:54:22.749818] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:45.157 [2024-07-24 18:54:22.749877] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:45.157 [2024-07-24 18:54:22.749970] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:11:45.157 [2024-07-24 18:54:22.749996] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:11:45.157 [2024-07-24 18:54:22.750006] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:11:45.157 [2024-07-24 18:54:22.750804] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:11:45.157 [2024-07-24 18:54:22.750828] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:11:45.157 [2024-07-24 18:54:22.750842] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:11:45.157 [2024-07-24 18:54:22.751808] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:45.157 [2024-07-24 18:54:22.751826] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:11:45.157 [2024-07-24 18:54:22.751839] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:11:45.157 [2024-07-24 18:54:22.752811] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:11:45.157 [2024-07-24 18:54:22.752829] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:45.157 [2024-07-24 18:54:22.753819] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:11:45.157 [2024-07-24 18:54:22.753838] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:11:45.157 [2024-07-24 18:54:22.753847] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:11:45.157 [2024-07-24 18:54:22.753862] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:45.157 [2024-07-24 18:54:22.753972] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:11:45.157 [2024-07-24 18:54:22.753980] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:45.157 [2024-07-24 18:54:22.753989] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:11:45.157 [2024-07-24 18:54:22.755129] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:11:45.157 [2024-07-24 18:54:22.755833] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:11:45.157 [2024-07-24 18:54:22.756849] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:45.157 [2024-07-24 18:54:22.757829] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:45.157 [2024-07-24 18:54:22.757949] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:45.416 [2024-07-24 18:54:22.762125] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:11:45.416 [2024-07-24 18:54:22.762144] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:45.416 [2024-07-24 18:54:22.762153] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:11:45.416 [2024-07-24 18:54:22.762177] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:11:45.416 [2024-07-24 18:54:22.762191] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:11:45.416 [2024-07-24 18:54:22.762215] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:45.416 [2024-07-24 18:54:22.762224] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:45.416 [2024-07-24 18:54:22.762231] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:45.416 [2024-07-24 18:54:22.762248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 
cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:45.416 [2024-07-24 18:54:22.762309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:45.416 [2024-07-24 18:54:22.762324] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:11:45.416 [2024-07-24 18:54:22.762332] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:11:45.416 [2024-07-24 18:54:22.762340] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:11:45.416 [2024-07-24 18:54:22.762347] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:45.416 [2024-07-24 18:54:22.762355] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:11:45.416 [2024-07-24 18:54:22.762363] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:11:45.416 [2024-07-24 18:54:22.762371] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:11:45.416 [2024-07-24 18:54:22.762387] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:11:45.416 [2024-07-24 18:54:22.762421] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:45.416 [2024-07-24 18:54:22.762440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:45.416 [2024-07-24 18:54:22.762461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.416 [2024-07-24 18:54:22.762475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.416 [2024-07-24 18:54:22.762487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.416 [2024-07-24 18:54:22.762498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.416 [2024-07-24 18:54:22.762506] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:11:45.416 [2024-07-24 18:54:22.762521] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:45.416 [2024-07-24 18:54:22.762535] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:45.416 [2024-07-24 18:54:22.762546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:45.416 [2024-07-24 18:54:22.762556] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:11:45.416 
[2024-07-24 18:54:22.762564] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:45.416 [2024-07-24 18:54:22.762578] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:11:45.416 [2024-07-24 18:54:22.762588] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:11:45.416 [2024-07-24 18:54:22.762600] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:45.416 [2024-07-24 18:54:22.762612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:45.416 [2024-07-24 18:54:22.762675] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:11:45.416 [2024-07-24 18:54:22.762690] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:11:45.416 [2024-07-24 18:54:22.762702] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:45.416 [2024-07-24 18:54:22.762710] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:45.416 [2024-07-24 18:54:22.762716] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:45.416 [2024-07-24 18:54:22.762725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:45.416 [2024-07-24 18:54:22.762741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:45.416 [2024-07-24 18:54:22.762760] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:11:45.416 [2024-07-24 18:54:22.762779] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:11:45.417 [2024-07-24 18:54:22.762793] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:11:45.417 [2024-07-24 18:54:22.762805] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:45.417 [2024-07-24 18:54:22.762813] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:45.417 [2024-07-24 18:54:22.762819] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:45.417 [2024-07-24 18:54:22.762828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:45.417 [2024-07-24 18:54:22.762853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:45.417 [2024-07-24 18:54:22.762873] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 
30000 ms) 00:11:45.417 [2024-07-24 18:54:22.762886] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:45.417 [2024-07-24 18:54:22.762898] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:45.417 [2024-07-24 18:54:22.762906] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:45.417 [2024-07-24 18:54:22.762912] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:45.417 [2024-07-24 18:54:22.762921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:45.417 [2024-07-24 18:54:22.762935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:45.417 [2024-07-24 18:54:22.762948] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:45.417 [2024-07-24 18:54:22.762959] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:11:45.417 [2024-07-24 18:54:22.762972] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:11:45.417 [2024-07-24 18:54:22.762984] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:11:45.417 [2024-07-24 18:54:22.762993] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:45.417 [2024-07-24 18:54:22.763001] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:11:45.417 [2024-07-24 18:54:22.763009] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:11:45.417 [2024-07-24 18:54:22.763016] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:11:45.417 [2024-07-24 18:54:22.763024] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:11:45.417 [2024-07-24 18:54:22.763048] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:45.417 [2024-07-24 18:54:22.763066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:45.417 [2024-07-24 18:54:22.763111] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:45.417 [2024-07-24 18:54:22.763126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:45.417 [2024-07-24 18:54:22.763144] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:45.417 [2024-07-24 
18:54:22.763156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:45.417 [2024-07-24 18:54:22.763173] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:45.417 [2024-07-24 18:54:22.763185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:45.417 [2024-07-24 18:54:22.763209] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:45.417 [2024-07-24 18:54:22.763219] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:45.417 [2024-07-24 18:54:22.763226] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:45.417 [2024-07-24 18:54:22.763232] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:45.417 [2024-07-24 18:54:22.763238] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:11:45.417 [2024-07-24 18:54:22.763247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:45.417 [2024-07-24 18:54:22.763259] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:45.417 [2024-07-24 18:54:22.763268] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:45.417 [2024-07-24 18:54:22.763274] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:45.417 [2024-07-24 18:54:22.763283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:45.417 [2024-07-24 18:54:22.763294] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:45.417 [2024-07-24 18:54:22.763302] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:45.417 [2024-07-24 18:54:22.763308] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:45.417 [2024-07-24 18:54:22.763317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:45.417 [2024-07-24 18:54:22.763329] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:45.417 [2024-07-24 18:54:22.763337] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:45.417 [2024-07-24 18:54:22.763343] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:45.417 [2024-07-24 18:54:22.763352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:45.417 [2024-07-24 18:54:22.763364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:45.417 [2024-07-24 18:54:22.763385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:45.417 [2024-07-24 
18:54:22.763423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:45.417 [2024-07-24 18:54:22.763435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:45.417 ===================================================== 00:11:45.417 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:45.417 ===================================================== 00:11:45.417 Controller Capabilities/Features 00:11:45.417 ================================ 00:11:45.417 Vendor ID: 4e58 00:11:45.417 Subsystem Vendor ID: 4e58 00:11:45.417 Serial Number: SPDK1 00:11:45.417 Model Number: SPDK bdev Controller 00:11:45.417 Firmware Version: 24.09 00:11:45.417 Recommended Arb Burst: 6 00:11:45.417 IEEE OUI Identifier: 8d 6b 50 00:11:45.417 Multi-path I/O 00:11:45.417 May have multiple subsystem ports: Yes 00:11:45.417 May have multiple controllers: Yes 00:11:45.417 Associated with SR-IOV VF: No 00:11:45.417 Max Data Transfer Size: 131072 00:11:45.417 Max Number of Namespaces: 32 00:11:45.417 Max Number of I/O Queues: 127 00:11:45.417 NVMe Specification Version (VS): 1.3 00:11:45.417 NVMe Specification Version (Identify): 1.3 00:11:45.417 Maximum Queue Entries: 256 00:11:45.417 Contiguous Queues Required: Yes 00:11:45.417 Arbitration Mechanisms Supported 00:11:45.417 Weighted Round Robin: Not Supported 00:11:45.417 Vendor Specific: Not Supported 00:11:45.417 Reset Timeout: 15000 ms 00:11:45.417 Doorbell Stride: 4 bytes 00:11:45.417 NVM Subsystem Reset: Not Supported 00:11:45.417 Command Sets Supported 00:11:45.417 NVM Command Set: Supported 00:11:45.417 Boot Partition: Not Supported 00:11:45.417 Memory Page Size Minimum: 4096 bytes 00:11:45.417 Memory Page Size Maximum: 4096 bytes 00:11:45.417 Persistent Memory Region: Not Supported 00:11:45.417 Optional Asynchronous Events Supported 00:11:45.417 Namespace Attribute Notices: Supported 00:11:45.417 Firmware Activation Notices: Not Supported 00:11:45.417 ANA Change Notices: Not Supported 00:11:45.417 PLE Aggregate Log Change Notices: Not Supported 00:11:45.417 LBA Status Info Alert Notices: Not Supported 00:11:45.417 EGE Aggregate Log Change Notices: Not Supported 00:11:45.417 Normal NVM Subsystem Shutdown event: Not Supported 00:11:45.417 Zone Descriptor Change Notices: Not Supported 00:11:45.417 Discovery Log Change Notices: Not Supported 00:11:45.417 Controller Attributes 00:11:45.417 128-bit Host Identifier: Supported 00:11:45.417 Non-Operational Permissive Mode: Not Supported 00:11:45.417 NVM Sets: Not Supported 00:11:45.417 Read Recovery Levels: Not Supported 00:11:45.417 Endurance Groups: Not Supported 00:11:45.417 Predictable Latency Mode: Not Supported 00:11:45.417 Traffic Based Keep ALive: Not Supported 00:11:45.417 Namespace Granularity: Not Supported 00:11:45.417 SQ Associations: Not Supported 00:11:45.417 UUID List: Not Supported 00:11:45.417 Multi-Domain Subsystem: Not Supported 00:11:45.417 Fixed Capacity Management: Not Supported 00:11:45.417 Variable Capacity Management: Not Supported 00:11:45.417 Delete Endurance Group: Not Supported 00:11:45.417 Delete NVM Set: Not Supported 00:11:45.417 Extended LBA Formats Supported: Not Supported 00:11:45.417 Flexible Data Placement Supported: Not Supported 00:11:45.417 00:11:45.417 Controller Memory Buffer Support 00:11:45.417 ================================ 00:11:45.417 Supported: No 00:11:45.417 00:11:45.418 Persistent 
Memory Region Support 00:11:45.418 ================================ 00:11:45.418 Supported: No 00:11:45.418 00:11:45.418 Admin Command Set Attributes 00:11:45.418 ============================ 00:11:45.418 Security Send/Receive: Not Supported 00:11:45.418 Format NVM: Not Supported 00:11:45.418 Firmware Activate/Download: Not Supported 00:11:45.418 Namespace Management: Not Supported 00:11:45.418 Device Self-Test: Not Supported 00:11:45.418 Directives: Not Supported 00:11:45.418 NVMe-MI: Not Supported 00:11:45.418 Virtualization Management: Not Supported 00:11:45.418 Doorbell Buffer Config: Not Supported 00:11:45.418 Get LBA Status Capability: Not Supported 00:11:45.418 Command & Feature Lockdown Capability: Not Supported 00:11:45.418 Abort Command Limit: 4 00:11:45.418 Async Event Request Limit: 4 00:11:45.418 Number of Firmware Slots: N/A 00:11:45.418 Firmware Slot 1 Read-Only: N/A 00:11:45.418 Firmware Activation Without Reset: N/A 00:11:45.418 Multiple Update Detection Support: N/A 00:11:45.418 Firmware Update Granularity: No Information Provided 00:11:45.418 Per-Namespace SMART Log: No 00:11:45.418 Asymmetric Namespace Access Log Page: Not Supported 00:11:45.418 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:11:45.418 Command Effects Log Page: Supported 00:11:45.418 Get Log Page Extended Data: Supported 00:11:45.418 Telemetry Log Pages: Not Supported 00:11:45.418 Persistent Event Log Pages: Not Supported 00:11:45.418 Supported Log Pages Log Page: May Support 00:11:45.418 Commands Supported & Effects Log Page: Not Supported 00:11:45.418 Feature Identifiers & Effects Log Page:May Support 00:11:45.418 NVMe-MI Commands & Effects Log Page: May Support 00:11:45.418 Data Area 4 for Telemetry Log: Not Supported 00:11:45.418 Error Log Page Entries Supported: 128 00:11:45.418 Keep Alive: Supported 00:11:45.418 Keep Alive Granularity: 10000 ms 00:11:45.418 00:11:45.418 NVM Command Set Attributes 00:11:45.418 ========================== 00:11:45.418 Submission Queue Entry Size 00:11:45.418 Max: 64 00:11:45.418 Min: 64 00:11:45.418 Completion Queue Entry Size 00:11:45.418 Max: 16 00:11:45.418 Min: 16 00:11:45.418 Number of Namespaces: 32 00:11:45.418 Compare Command: Supported 00:11:45.418 Write Uncorrectable Command: Not Supported 00:11:45.418 Dataset Management Command: Supported 00:11:45.418 Write Zeroes Command: Supported 00:11:45.418 Set Features Save Field: Not Supported 00:11:45.418 Reservations: Not Supported 00:11:45.418 Timestamp: Not Supported 00:11:45.418 Copy: Supported 00:11:45.418 Volatile Write Cache: Present 00:11:45.418 Atomic Write Unit (Normal): 1 00:11:45.418 Atomic Write Unit (PFail): 1 00:11:45.418 Atomic Compare & Write Unit: 1 00:11:45.418 Fused Compare & Write: Supported 00:11:45.418 Scatter-Gather List 00:11:45.418 SGL Command Set: Supported (Dword aligned) 00:11:45.418 SGL Keyed: Not Supported 00:11:45.418 SGL Bit Bucket Descriptor: Not Supported 00:11:45.418 SGL Metadata Pointer: Not Supported 00:11:45.418 Oversized SGL: Not Supported 00:11:45.418 SGL Metadata Address: Not Supported 00:11:45.418 SGL Offset: Not Supported 00:11:45.418 Transport SGL Data Block: Not Supported 00:11:45.418 Replay Protected Memory Block: Not Supported 00:11:45.418 00:11:45.418 Firmware Slot Information 00:11:45.418 ========================= 00:11:45.418 Active slot: 1 00:11:45.418 Slot 1 Firmware Revision: 24.09 00:11:45.418 00:11:45.418 00:11:45.418 Commands Supported and Effects 00:11:45.418 ============================== 00:11:45.418 Admin Commands 00:11:45.418 -------------- 00:11:45.418 Get 
Log Page (02h): Supported 00:11:45.418 Identify (06h): Supported 00:11:45.418 Abort (08h): Supported 00:11:45.418 Set Features (09h): Supported 00:11:45.418 Get Features (0Ah): Supported 00:11:45.418 Asynchronous Event Request (0Ch): Supported 00:11:45.418 Keep Alive (18h): Supported 00:11:45.418 I/O Commands 00:11:45.418 ------------ 00:11:45.418 Flush (00h): Supported LBA-Change 00:11:45.418 Write (01h): Supported LBA-Change 00:11:45.418 Read (02h): Supported 00:11:45.418 Compare (05h): Supported 00:11:45.418 Write Zeroes (08h): Supported LBA-Change 00:11:45.418 Dataset Management (09h): Supported LBA-Change 00:11:45.418 Copy (19h): Supported LBA-Change 00:11:45.418 00:11:45.418 Error Log 00:11:45.418 ========= 00:11:45.418 00:11:45.418 Arbitration 00:11:45.418 =========== 00:11:45.418 Arbitration Burst: 1 00:11:45.418 00:11:45.418 Power Management 00:11:45.418 ================ 00:11:45.418 Number of Power States: 1 00:11:45.418 Current Power State: Power State #0 00:11:45.418 Power State #0: 00:11:45.418 Max Power: 0.00 W 00:11:45.418 Non-Operational State: Operational 00:11:45.418 Entry Latency: Not Reported 00:11:45.418 Exit Latency: Not Reported 00:11:45.418 Relative Read Throughput: 0 00:11:45.418 Relative Read Latency: 0 00:11:45.418 Relative Write Throughput: 0 00:11:45.418 Relative Write Latency: 0 00:11:45.418 Idle Power: Not Reported 00:11:45.418 Active Power: Not Reported 00:11:45.418 Non-Operational Permissive Mode: Not Supported 00:11:45.418 00:11:45.418 Health Information 00:11:45.418 ================== 00:11:45.418 Critical Warnings: 00:11:45.418 Available Spare Space: OK 00:11:45.418 Temperature: OK 00:11:45.418 Device Reliability: OK 00:11:45.418 Read Only: No 00:11:45.418 Volatile Memory Backup: OK 00:11:45.418 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:45.418 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:45.418 Available Spare: 0% 00:11:45.418 Available Sp[2024-07-24 18:54:22.763564] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:45.418 [2024-07-24 18:54:22.763583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:45.418 [2024-07-24 18:54:22.763622] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:11:45.418 [2024-07-24 18:54:22.763639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.418 [2024-07-24 18:54:22.763650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.418 [2024-07-24 18:54:22.763660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.418 [2024-07-24 18:54:22.763669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.418 [2024-07-24 18:54:22.763897] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:45.418 [2024-07-24 18:54:22.763916] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:11:45.418 [2024-07-24 18:54:22.764893] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:45.418 [2024-07-24 18:54:22.764967] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:11:45.418 [2024-07-24 18:54:22.764981] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:11:45.418 [2024-07-24 18:54:22.765905] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:11:45.418 [2024-07-24 18:54:22.765927] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:11:45.418 [2024-07-24 18:54:22.765981] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:11:45.418 [2024-07-24 18:54:22.767948] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:45.418 are Threshold: 0% 00:11:45.418 Life Percentage Used: 0% 00:11:45.418 Data Units Read: 0 00:11:45.418 Data Units Written: 0 00:11:45.418 Host Read Commands: 0 00:11:45.418 Host Write Commands: 0 00:11:45.418 Controller Busy Time: 0 minutes 00:11:45.418 Power Cycles: 0 00:11:45.418 Power On Hours: 0 hours 00:11:45.418 Unsafe Shutdowns: 0 00:11:45.418 Unrecoverable Media Errors: 0 00:11:45.418 Lifetime Error Log Entries: 0 00:11:45.418 Warning Temperature Time: 0 minutes 00:11:45.418 Critical Temperature Time: 0 minutes 00:11:45.418 00:11:45.418 Number of Queues 00:11:45.418 ================ 00:11:45.418 Number of I/O Submission Queues: 127 00:11:45.418 Number of I/O Completion Queues: 127 00:11:45.418 00:11:45.418 Active Namespaces 00:11:45.418 ================= 00:11:45.418 Namespace ID:1 00:11:45.418 Error Recovery Timeout: Unlimited 00:11:45.418 Command Set Identifier: NVM (00h) 00:11:45.418 Deallocate: Supported 00:11:45.418 Deallocated/Unwritten Error: Not Supported 00:11:45.418 Deallocated Read Value: Unknown 00:11:45.418 Deallocate in Write Zeroes: Not Supported 00:11:45.418 Deallocated Guard Field: 0xFFFF 00:11:45.419 Flush: Supported 00:11:45.419 Reservation: Supported 00:11:45.419 Namespace Sharing Capabilities: Multiple Controllers 00:11:45.419 Size (in LBAs): 131072 (0GiB) 00:11:45.419 Capacity (in LBAs): 131072 (0GiB) 00:11:45.419 Utilization (in LBAs): 131072 (0GiB) 00:11:45.419 NGUID: 755B06B8757D45A999F087852DB90F48 00:11:45.419 UUID: 755b06b8-757d-45a9-99f0-87852db90f48 00:11:45.419 Thin Provisioning: Not Supported 00:11:45.419 Per-NS Atomic Units: Yes 00:11:45.419 Atomic Boundary Size (Normal): 0 00:11:45.419 Atomic Boundary Size (PFail): 0 00:11:45.419 Atomic Boundary Offset: 0 00:11:45.419 Maximum Single Source Range Length: 65535 00:11:45.419 Maximum Copy Length: 65535 00:11:45.419 Maximum Source Range Count: 1 00:11:45.419 NGUID/EUI64 Never Reused: No 00:11:45.419 Namespace Write Protected: No 00:11:45.419 Number of LBA Formats: 1 00:11:45.419 Current LBA Format: LBA Format #00 00:11:45.419 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:45.419 00:11:45.419 18:54:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:45.419 EAL: No free 2048 kB hugepages reported 
on node 1 00:11:45.419 [2024-07-24 18:54:23.006009] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:50.680 Initializing NVMe Controllers 00:11:50.680 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:50.680 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:50.680 Initialization complete. Launching workers. 00:11:50.680 ======================================================== 00:11:50.680 Latency(us) 00:11:50.680 Device Information : IOPS MiB/s Average min max 00:11:50.680 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33414.15 130.52 3829.94 1181.39 7368.21 00:11:50.680 ======================================================== 00:11:50.680 Total : 33414.15 130.52 3829.94 1181.39 7368.21 00:11:50.680 00:11:50.680 [2024-07-24 18:54:28.025693] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:50.680 18:54:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:50.680 EAL: No free 2048 kB hugepages reported on node 1 00:11:50.680 [2024-07-24 18:54:28.260841] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:55.937 Initializing NVMe Controllers 00:11:55.937 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:55.937 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:55.937 Initialization complete. Launching workers. 
00:11:55.937 ======================================================== 00:11:55.937 Latency(us) 00:11:55.937 Device Information : IOPS MiB/s Average min max 00:11:55.937 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16000.00 62.50 8006.79 7523.82 12010.21 00:11:55.937 ======================================================== 00:11:55.937 Total : 16000.00 62.50 8006.79 7523.82 12010.21 00:11:55.937 00:11:55.937 [2024-07-24 18:54:33.295512] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:55.937 18:54:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:55.937 EAL: No free 2048 kB hugepages reported on node 1 00:11:55.937 [2024-07-24 18:54:33.503571] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:01.199 [2024-07-24 18:54:38.573456] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:01.199 Initializing NVMe Controllers 00:12:01.199 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:01.199 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:01.199 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:01.199 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:01.199 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:01.199 Initialization complete. Launching workers. 00:12:01.199 Starting thread on core 2 00:12:01.199 Starting thread on core 3 00:12:01.199 Starting thread on core 1 00:12:01.199 18:54:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:01.199 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.459 [2024-07-24 18:54:38.881443] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:04.800 [2024-07-24 18:54:41.952807] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:04.800 Initializing NVMe Controllers 00:12:04.800 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:04.800 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:04.800 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:04.800 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:04.800 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:04.800 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:04.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:04.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:04.800 Initialization complete. Launching workers. 
00:12:04.800 Starting thread on core 1 with urgent priority queue 00:12:04.800 Starting thread on core 2 with urgent priority queue 00:12:04.800 Starting thread on core 3 with urgent priority queue 00:12:04.800 Starting thread on core 0 with urgent priority queue 00:12:04.800 SPDK bdev Controller (SPDK1 ) core 0: 3958.67 IO/s 25.26 secs/100000 ios 00:12:04.800 SPDK bdev Controller (SPDK1 ) core 1: 5600.00 IO/s 17.86 secs/100000 ios 00:12:04.800 SPDK bdev Controller (SPDK1 ) core 2: 5614.00 IO/s 17.81 secs/100000 ios 00:12:04.800 SPDK bdev Controller (SPDK1 ) core 3: 5609.33 IO/s 17.83 secs/100000 ios 00:12:04.800 ======================================================== 00:12:04.800 00:12:04.800 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:04.800 EAL: No free 2048 kB hugepages reported on node 1 00:12:04.800 [2024-07-24 18:54:42.250709] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:04.800 Initializing NVMe Controllers 00:12:04.800 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:04.800 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:04.800 Namespace ID: 1 size: 0GB 00:12:04.800 Initialization complete. 00:12:04.800 INFO: using host memory buffer for IO 00:12:04.800 Hello world! 00:12:04.800 [2024-07-24 18:54:42.284294] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:04.800 18:54:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:04.800 EAL: No free 2048 kB hugepages reported on node 1 00:12:05.060 [2024-07-24 18:54:42.570630] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:05.996 Initializing NVMe Controllers 00:12:05.996 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:05.996 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:05.996 Initialization complete. Launching workers. 
00:12:05.996 submit (in ns) avg, min, max = 7093.5, 3571.1, 4016053.3 00:12:05.996 complete (in ns) avg, min, max = 26491.3, 2086.7, 5993556.7 00:12:05.996 00:12:05.996 Submit histogram 00:12:05.996 ================ 00:12:05.996 Range in us Cumulative Count 00:12:05.996 3.556 - 3.579: 0.1703% ( 22) 00:12:05.996 3.579 - 3.603: 2.3376% ( 280) 00:12:05.996 3.603 - 3.627: 7.2761% ( 638) 00:12:05.996 3.627 - 3.650: 16.1003% ( 1140) 00:12:05.996 3.650 - 3.674: 24.3827% ( 1070) 00:12:05.996 3.674 - 3.698: 33.4701% ( 1174) 00:12:05.996 3.698 - 3.721: 41.5202% ( 1040) 00:12:05.996 3.721 - 3.745: 48.0687% ( 846) 00:12:05.996 3.745 - 3.769: 53.4639% ( 697) 00:12:05.996 3.769 - 3.793: 57.4348% ( 513) 00:12:05.996 3.793 - 3.816: 61.5760% ( 535) 00:12:05.996 3.816 - 3.840: 65.4617% ( 502) 00:12:05.996 3.840 - 3.864: 69.5642% ( 530) 00:12:05.996 3.864 - 3.887: 73.9995% ( 573) 00:12:05.996 3.887 - 3.911: 78.3497% ( 562) 00:12:05.996 3.911 - 3.935: 82.0961% ( 484) 00:12:05.996 3.935 - 3.959: 84.9524% ( 369) 00:12:05.996 3.959 - 3.982: 86.9030% ( 252) 00:12:05.996 3.982 - 4.006: 88.7298% ( 236) 00:12:05.996 4.006 - 4.030: 90.2082% ( 191) 00:12:05.996 4.030 - 4.053: 91.3925% ( 153) 00:12:05.996 4.053 - 4.077: 92.5226% ( 146) 00:12:05.996 4.077 - 4.101: 93.4360% ( 118) 00:12:05.996 4.101 - 4.124: 94.4578% ( 132) 00:12:05.996 4.124 - 4.148: 95.1389% ( 88) 00:12:05.996 4.148 - 4.172: 95.6034% ( 60) 00:12:05.996 4.172 - 4.196: 95.9749% ( 48) 00:12:05.996 4.196 - 4.219: 96.2613% ( 37) 00:12:05.996 4.219 - 4.243: 96.4781% ( 28) 00:12:05.996 4.243 - 4.267: 96.6329% ( 20) 00:12:05.996 4.267 - 4.290: 96.7412% ( 14) 00:12:05.996 4.290 - 4.314: 96.8419% ( 13) 00:12:05.996 4.314 - 4.338: 96.9347% ( 12) 00:12:05.996 4.338 - 4.361: 97.0199% ( 11) 00:12:05.996 4.361 - 4.385: 97.1050% ( 11) 00:12:05.996 4.385 - 4.409: 97.1205% ( 2) 00:12:05.996 4.409 - 4.433: 97.2057% ( 11) 00:12:05.996 4.433 - 4.456: 97.2598% ( 7) 00:12:05.996 4.456 - 4.480: 97.2908% ( 4) 00:12:05.996 4.480 - 4.504: 97.2986% ( 1) 00:12:05.996 4.504 - 4.527: 97.3140% ( 2) 00:12:05.996 4.527 - 4.551: 97.3218% ( 1) 00:12:05.996 4.551 - 4.575: 97.3527% ( 4) 00:12:05.996 4.575 - 4.599: 97.3605% ( 1) 00:12:05.996 4.646 - 4.670: 97.3837% ( 3) 00:12:05.996 4.670 - 4.693: 97.3992% ( 2) 00:12:05.996 4.693 - 4.717: 97.4301% ( 4) 00:12:05.996 4.717 - 4.741: 97.4456% ( 2) 00:12:05.996 4.741 - 4.764: 97.4843% ( 5) 00:12:05.996 4.764 - 4.788: 97.5308% ( 6) 00:12:05.996 4.788 - 4.812: 97.5462% ( 2) 00:12:05.996 4.812 - 4.836: 97.5772% ( 4) 00:12:05.996 4.836 - 4.859: 97.6469% ( 9) 00:12:05.997 4.859 - 4.883: 97.6701% ( 3) 00:12:05.997 4.883 - 4.907: 97.7011% ( 4) 00:12:05.997 4.907 - 4.930: 97.7552% ( 7) 00:12:05.997 4.930 - 4.954: 97.7707% ( 2) 00:12:05.997 4.954 - 4.978: 97.8172% ( 6) 00:12:05.997 4.978 - 5.001: 97.8559% ( 5) 00:12:05.997 5.001 - 5.025: 97.9023% ( 6) 00:12:05.997 5.025 - 5.049: 97.9333% ( 4) 00:12:05.997 5.049 - 5.073: 97.9797% ( 6) 00:12:05.997 5.073 - 5.096: 98.0029% ( 3) 00:12:05.997 5.096 - 5.120: 98.0107% ( 1) 00:12:05.997 5.120 - 5.144: 98.0339% ( 3) 00:12:05.997 5.144 - 5.167: 98.0416% ( 1) 00:12:05.997 5.167 - 5.191: 98.0571% ( 2) 00:12:05.997 5.191 - 5.215: 98.0726% ( 2) 00:12:05.997 5.239 - 5.262: 98.0881% ( 2) 00:12:05.997 5.286 - 5.310: 98.1036% ( 2) 00:12:05.997 5.333 - 5.357: 98.1113% ( 1) 00:12:05.997 5.357 - 5.381: 98.1190% ( 1) 00:12:05.997 5.452 - 5.476: 98.1268% ( 1) 00:12:05.997 5.499 - 5.523: 98.1345% ( 1) 00:12:05.997 5.570 - 5.594: 98.1423% ( 1) 00:12:05.997 5.618 - 5.641: 98.1500% ( 1) 00:12:05.997 5.855 - 5.879: 98.1578% ( 1) 
00:12:05.997 5.973 - 5.997: 98.1732% ( 2) 00:12:05.997 6.044 - 6.068: 98.1810% ( 1) 00:12:05.997 6.210 - 6.258: 98.1887% ( 1) 00:12:05.997 6.258 - 6.305: 98.1965% ( 1) 00:12:05.997 6.305 - 6.353: 98.2197% ( 3) 00:12:05.997 6.590 - 6.637: 98.2352% ( 2) 00:12:05.997 6.637 - 6.684: 98.2429% ( 1) 00:12:05.997 6.827 - 6.874: 98.2506% ( 1) 00:12:05.997 6.874 - 6.921: 98.2584% ( 1) 00:12:05.997 6.969 - 7.016: 98.2739% ( 2) 00:12:05.997 7.016 - 7.064: 98.2816% ( 1) 00:12:05.997 7.064 - 7.111: 98.2971% ( 2) 00:12:05.997 7.159 - 7.206: 98.3048% ( 1) 00:12:05.997 7.206 - 7.253: 98.3126% ( 1) 00:12:05.997 7.253 - 7.301: 98.3280% ( 2) 00:12:05.997 7.301 - 7.348: 98.3358% ( 1) 00:12:05.997 7.348 - 7.396: 98.3435% ( 1) 00:12:05.997 7.490 - 7.538: 98.3590% ( 2) 00:12:05.997 7.538 - 7.585: 98.3667% ( 1) 00:12:05.997 7.585 - 7.633: 98.3745% ( 1) 00:12:05.997 7.633 - 7.680: 98.3822% ( 1) 00:12:05.997 7.727 - 7.775: 98.3900% ( 1) 00:12:05.997 7.775 - 7.822: 98.4209% ( 4) 00:12:05.997 7.870 - 7.917: 98.4287% ( 1) 00:12:05.997 7.917 - 7.964: 98.4364% ( 1) 00:12:05.997 7.964 - 8.012: 98.4519% ( 2) 00:12:05.997 8.012 - 8.059: 98.4596% ( 1) 00:12:05.997 8.059 - 8.107: 98.4751% ( 2) 00:12:05.997 8.107 - 8.154: 98.4829% ( 1) 00:12:05.997 8.296 - 8.344: 98.4906% ( 1) 00:12:05.997 8.391 - 8.439: 98.4983% ( 1) 00:12:05.997 8.486 - 8.533: 98.5061% ( 1) 00:12:05.997 8.676 - 8.723: 98.5138% ( 1) 00:12:05.997 8.723 - 8.770: 98.5216% ( 1) 00:12:05.997 8.770 - 8.818: 98.5293% ( 1) 00:12:05.997 8.818 - 8.865: 98.5370% ( 1) 00:12:05.997 8.865 - 8.913: 98.5448% ( 1) 00:12:05.997 9.102 - 9.150: 98.5525% ( 1) 00:12:05.997 9.244 - 9.292: 98.5603% ( 1) 00:12:05.997 9.292 - 9.339: 98.5757% ( 2) 00:12:05.997 9.339 - 9.387: 98.5835% ( 1) 00:12:05.997 9.481 - 9.529: 98.5912% ( 1) 00:12:05.997 9.624 - 9.671: 98.5990% ( 1) 00:12:05.997 9.719 - 9.766: 98.6067% ( 1) 00:12:05.997 9.813 - 9.861: 98.6144% ( 1) 00:12:05.997 9.908 - 9.956: 98.6222% ( 1) 00:12:05.997 10.335 - 10.382: 98.6299% ( 1) 00:12:05.997 10.477 - 10.524: 98.6377% ( 1) 00:12:05.997 10.572 - 10.619: 98.6454% ( 1) 00:12:05.997 10.619 - 10.667: 98.6531% ( 1) 00:12:05.997 10.904 - 10.951: 98.6686% ( 2) 00:12:05.997 10.999 - 11.046: 98.6764% ( 1) 00:12:05.997 11.236 - 11.283: 98.6841% ( 1) 00:12:05.997 11.425 - 11.473: 98.6918% ( 1) 00:12:05.997 11.567 - 11.615: 98.7073% ( 2) 00:12:05.997 11.947 - 11.994: 98.7151% ( 1) 00:12:05.997 12.136 - 12.231: 98.7228% ( 1) 00:12:05.997 12.516 - 12.610: 98.7306% ( 1) 00:12:05.997 12.610 - 12.705: 98.7383% ( 1) 00:12:05.997 12.705 - 12.800: 98.7460% ( 1) 00:12:05.997 12.895 - 12.990: 98.7538% ( 1) 00:12:05.997 12.990 - 13.084: 98.7693% ( 2) 00:12:05.997 13.179 - 13.274: 98.7847% ( 2) 00:12:05.997 13.464 - 13.559: 98.7925% ( 1) 00:12:05.997 13.559 - 13.653: 98.8080% ( 2) 00:12:05.997 13.653 - 13.748: 98.8234% ( 2) 00:12:05.997 13.843 - 13.938: 98.8389% ( 2) 00:12:05.997 14.507 - 14.601: 98.8467% ( 1) 00:12:05.997 17.161 - 17.256: 98.8544% ( 1) 00:12:05.997 17.256 - 17.351: 98.8699% ( 2) 00:12:05.997 17.351 - 17.446: 98.9086% ( 5) 00:12:05.997 17.446 - 17.541: 98.9318% ( 3) 00:12:05.997 17.541 - 17.636: 98.9782% ( 6) 00:12:05.997 17.636 - 17.730: 99.0170% ( 5) 00:12:05.997 17.730 - 17.825: 99.0634% ( 6) 00:12:05.997 17.825 - 17.920: 99.1021% ( 5) 00:12:05.997 17.920 - 18.015: 99.1563% ( 7) 00:12:05.997 18.015 - 18.110: 99.1795% ( 3) 00:12:05.997 18.110 - 18.204: 99.2569% ( 10) 00:12:05.997 18.204 - 18.299: 99.2956% ( 5) 00:12:05.997 18.299 - 18.394: 99.3498% ( 7) 00:12:05.997 18.394 - 18.489: 99.4427% ( 12) 00:12:05.997 18.489 - 18.584: 
99.5356% ( 12) 00:12:05.997 18.584 - 18.679: 99.5898% ( 7) 00:12:05.997 18.679 - 18.773: 99.6207% ( 4) 00:12:05.997 18.773 - 18.868: 99.6672% ( 6) 00:12:05.997 18.868 - 18.963: 99.7059% ( 5) 00:12:05.997 18.963 - 19.058: 99.7291% ( 3) 00:12:05.997 19.058 - 19.153: 99.7755% ( 6) 00:12:05.997 19.153 - 19.247: 99.7833% ( 1) 00:12:05.997 19.247 - 19.342: 99.7910% ( 1) 00:12:05.997 19.437 - 19.532: 99.7987% ( 1) 00:12:05.997 19.532 - 19.627: 99.8065% ( 1) 00:12:05.997 19.627 - 19.721: 99.8220% ( 2) 00:12:05.997 19.816 - 19.911: 99.8297% ( 1) 00:12:05.997 19.911 - 20.006: 99.8374% ( 1) 00:12:05.997 20.006 - 20.101: 99.8452% ( 1) 00:12:05.997 20.101 - 20.196: 99.8529% ( 1) 00:12:05.997 20.196 - 20.290: 99.8607% ( 1) 00:12:05.997 20.670 - 20.764: 99.8684% ( 1) 00:12:05.997 21.049 - 21.144: 99.8762% ( 1) 00:12:05.997 21.144 - 21.239: 99.8839% ( 1) 00:12:05.997 21.428 - 21.523: 99.8916% ( 1) 00:12:05.997 23.040 - 23.135: 99.8994% ( 1) 00:12:05.997 27.876 - 28.065: 99.9071% ( 1) 00:12:05.997 28.255 - 28.444: 99.9149% ( 1) 00:12:05.997 28.444 - 28.634: 99.9226% ( 1) 00:12:05.997 3980.705 - 4004.978: 99.9768% ( 7) 00:12:05.997 4004.978 - 4029.250: 100.0000% ( 3) 00:12:05.997 00:12:05.997 Complete histogram 00:12:05.997 ================== 00:12:05.997 Range in us Cumulative Count 00:12:05.997 2.086 - 2.098: 3.0653% ( 396) 00:12:05.997 2.098 - 2.110: 29.6695% ( 3437) 00:12:05.997 2.110 - 2.121: 35.4826% ( 751) 00:12:05.997 2.121 - 2.133: 41.6441% ( 796) 00:12:05.997 2.133 - 2.145: 55.8635% ( 1837) 00:12:05.997 2.145 - 2.157: 59.2848% ( 442) 00:12:05.997 2.157 - 2.169: 63.5885% ( 556) 00:12:05.997 2.169 - 2.181: 71.6387% ( 1040) 00:12:05.997 2.181 - 2.193: 73.2487% ( 208) 00:12:05.997 2.193 - 2.204: 76.8635% ( 467) 00:12:05.997 2.204 - 2.216: 81.5388% ( 604) 00:12:05.997 2.216 - 2.228: 82.8392% ( 168) 00:12:05.997 2.228 - 2.240: 83.8145% ( 126) 00:12:05.997 2.240 - 2.252: 87.7158% ( 504) 00:12:05.997 2.252 - 2.264: 90.2392% ( 326) 00:12:05.997 2.264 - 2.276: 91.4544% ( 157) 00:12:05.997 2.276 - 2.287: 93.3586% ( 246) 00:12:05.997 2.287 - 2.299: 93.8695% ( 66) 00:12:05.997 2.299 - 2.311: 94.1249% ( 33) 00:12:05.997 2.311 - 2.323: 94.4346% ( 40) 00:12:05.997 2.323 - 2.335: 95.2086% ( 100) 00:12:05.997 2.335 - 2.347: 95.4253% ( 28) 00:12:05.997 2.347 - 2.359: 95.5260% ( 13) 00:12:05.997 2.359 - 2.370: 95.6034% ( 10) 00:12:05.997 2.370 - 2.382: 95.6963% ( 12) 00:12:05.997 2.382 - 2.394: 95.8124% ( 15) 00:12:05.997 2.394 - 2.406: 96.0833% ( 35) 00:12:05.997 2.406 - 2.418: 96.4703% ( 50) 00:12:05.997 2.418 - 2.430: 96.7567% ( 37) 00:12:05.997 2.430 - 2.441: 96.9734% ( 28) 00:12:05.997 2.441 - 2.453: 97.1360% ( 21) 00:12:05.997 2.453 - 2.465: 97.2753% ( 18) 00:12:05.997 2.465 - 2.477: 97.4921% ( 28) 00:12:05.997 2.477 - 2.489: 97.6701% ( 23) 00:12:05.997 2.489 - 2.501: 97.8559% ( 24) 00:12:05.997 2.501 - 2.513: 98.0107% ( 20) 00:12:05.997 2.513 - 2.524: 98.0803% ( 9) 00:12:05.997 2.524 - 2.536: 98.1578% ( 10) 00:12:05.997 2.536 - 2.548: 98.2352% ( 10) 00:12:05.997 2.548 - 2.560: 98.2661% ( 4) 00:12:05.997 2.560 - 2.572: 98.3203% ( 7) 00:12:05.997 2.572 - 2.584: 98.3513% ( 4) 00:12:05.997 2.584 - 2.596: 98.3667% ( 2) 00:12:05.997 2.596 - 2.607: 98.3900% ( 3) 00:12:05.997 2.631 - 2.643: 98.3977% ( 1) 00:12:05.997 2.643 - 2.655: 98.4054% ( 1) 00:12:05.997 2.655 - 2.667: 98.4132% ( 1) 00:12:05.997 2.667 - 2.679: 98.4209% ( 1) 00:12:05.997 2.714 - 2.726: 98.4287% ( 1) 00:12:05.997 2.726 - 2.738: 98.4364% ( 1) 00:12:05.997 2.750 - 2.761: 98.4519% ( 2) 00:12:05.997 2.809 - 2.821: 98.4596% ( 1) 00:12:05.997 3.437 - 
3.461: 98.4674% ( 1) 00:12:05.997 3.508 - 3.532: 98.4751% ( 1) 00:12:05.997 3.532 - 3.556: 98.4829% ( 1) 00:12:05.997 3.579 - 3.603: 98.4906% ( 1) 00:12:05.997 3.603 - 3.627: 98.4983% ( 1) 00:12:05.997 3.674 - 3.698: 98.5138% ( 2) 00:12:05.997 3.698 - 3.721: 98.5293% ( 2) 00:12:05.997 3.721 - 3.745: 98.5525% ( 3) 00:12:05.997 3.745 - 3.769: 98.5603% ( 1) 00:12:05.997 3.793 - 3.816: 98.5680% ( 1) 00:12:05.997 3.840 - 3.864: 98.5757% ( 1) 00:12:05.997 3.864 - 3.887: 98.5835% ( 1) 00:12:05.997 3.911 - 3.935: 98.5912% ( 1) 00:12:05.997 3.935 - 3.959: 9[2024-07-24 18:54:43.590032] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:06.255 8.5990% ( 1) 00:12:06.255 3.959 - 3.982: 98.6067% ( 1) 00:12:06.255 4.006 - 4.030: 98.6144% ( 1) 00:12:06.255 4.053 - 4.077: 98.6222% ( 1) 00:12:06.255 4.077 - 4.101: 98.6299% ( 1) 00:12:06.255 4.101 - 4.124: 98.6377% ( 1) 00:12:06.255 4.148 - 4.172: 98.6609% ( 3) 00:12:06.255 5.428 - 5.452: 98.6686% ( 1) 00:12:06.255 5.523 - 5.547: 98.6764% ( 1) 00:12:06.255 5.594 - 5.618: 98.6841% ( 1) 00:12:06.255 5.618 - 5.641: 98.6918% ( 1) 00:12:06.255 5.665 - 5.689: 98.6996% ( 1) 00:12:06.255 5.736 - 5.760: 98.7073% ( 1) 00:12:06.255 6.068 - 6.116: 98.7151% ( 1) 00:12:06.255 6.116 - 6.163: 98.7228% ( 1) 00:12:06.255 6.258 - 6.305: 98.7306% ( 1) 00:12:06.255 6.305 - 6.353: 98.7383% ( 1) 00:12:06.255 6.447 - 6.495: 98.7460% ( 1) 00:12:06.255 6.495 - 6.542: 98.7538% ( 1) 00:12:06.255 6.779 - 6.827: 98.7615% ( 1) 00:12:06.255 7.348 - 7.396: 98.7693% ( 1) 00:12:06.255 10.050 - 10.098: 98.7770% ( 1) 00:12:06.255 10.098 - 10.145: 98.7847% ( 1) 00:12:06.255 12.421 - 12.516: 98.7925% ( 1) 00:12:06.255 15.550 - 15.644: 98.8002% ( 1) 00:12:06.255 15.739 - 15.834: 98.8234% ( 3) 00:12:06.255 15.834 - 15.929: 98.8544% ( 4) 00:12:06.255 15.929 - 16.024: 98.8699% ( 2) 00:12:06.255 16.024 - 16.119: 98.8931% ( 3) 00:12:06.255 16.119 - 16.213: 98.9318% ( 5) 00:12:06.255 16.213 - 16.308: 98.9860% ( 7) 00:12:06.255 16.308 - 16.403: 99.0324% ( 6) 00:12:06.255 16.403 - 16.498: 99.0944% ( 8) 00:12:06.255 16.498 - 16.593: 99.1021% ( 1) 00:12:06.255 16.593 - 16.687: 99.1176% ( 2) 00:12:06.255 16.687 - 16.782: 99.1640% ( 6) 00:12:06.255 16.782 - 16.877: 99.2182% ( 7) 00:12:06.255 16.877 - 16.972: 99.2646% ( 6) 00:12:06.255 16.972 - 17.067: 99.2724% ( 1) 00:12:06.255 17.067 - 17.161: 99.2801% ( 1) 00:12:06.255 17.256 - 17.351: 99.2879% ( 1) 00:12:06.255 17.446 - 17.541: 99.2956% ( 1) 00:12:06.255 17.541 - 17.636: 99.3188% ( 3) 00:12:06.255 17.636 - 17.730: 99.3266% ( 1) 00:12:06.255 17.730 - 17.825: 99.3343% ( 1) 00:12:06.255 17.825 - 17.920: 99.3421% ( 1) 00:12:06.255 17.920 - 18.015: 99.3498% ( 1) 00:12:06.255 18.015 - 18.110: 99.3575% ( 1) 00:12:06.255 18.299 - 18.394: 99.3653% ( 1) 00:12:06.255 18.773 - 18.868: 99.3730% ( 1) 00:12:06.255 18.963 - 19.058: 99.3808% ( 1) 00:12:06.255 20.670 - 20.764: 99.3885% ( 1) 00:12:06.255 2014.625 - 2026.761: 99.4040% ( 2) 00:12:06.255 3021.938 - 3034.074: 99.4117% ( 1) 00:12:06.255 3082.619 - 3094.756: 99.4195% ( 1) 00:12:06.255 3980.705 - 4004.978: 99.9071% ( 63) 00:12:06.255 4004.978 - 4029.250: 99.9845% ( 10) 00:12:06.255 4975.881 - 5000.154: 99.9923% ( 1) 00:12:06.255 5971.058 - 5995.330: 100.0000% ( 1) 00:12:06.255 00:12:06.255 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:06.255 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # 
local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:06.255 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:06.255 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:06.255 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:06.513 [ 00:12:06.513 { 00:12:06.513 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:06.513 "subtype": "Discovery", 00:12:06.513 "listen_addresses": [], 00:12:06.513 "allow_any_host": true, 00:12:06.513 "hosts": [] 00:12:06.513 }, 00:12:06.513 { 00:12:06.513 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:06.513 "subtype": "NVMe", 00:12:06.513 "listen_addresses": [ 00:12:06.513 { 00:12:06.513 "trtype": "VFIOUSER", 00:12:06.513 "adrfam": "IPv4", 00:12:06.513 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:06.513 "trsvcid": "0" 00:12:06.513 } 00:12:06.513 ], 00:12:06.513 "allow_any_host": true, 00:12:06.513 "hosts": [], 00:12:06.513 "serial_number": "SPDK1", 00:12:06.513 "model_number": "SPDK bdev Controller", 00:12:06.513 "max_namespaces": 32, 00:12:06.513 "min_cntlid": 1, 00:12:06.513 "max_cntlid": 65519, 00:12:06.513 "namespaces": [ 00:12:06.513 { 00:12:06.513 "nsid": 1, 00:12:06.513 "bdev_name": "Malloc1", 00:12:06.513 "name": "Malloc1", 00:12:06.513 "nguid": "755B06B8757D45A999F087852DB90F48", 00:12:06.513 "uuid": "755b06b8-757d-45a9-99f0-87852db90f48" 00:12:06.513 } 00:12:06.513 ] 00:12:06.513 }, 00:12:06.513 { 00:12:06.513 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:06.513 "subtype": "NVMe", 00:12:06.513 "listen_addresses": [ 00:12:06.513 { 00:12:06.513 "trtype": "VFIOUSER", 00:12:06.513 "adrfam": "IPv4", 00:12:06.513 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:06.513 "trsvcid": "0" 00:12:06.513 } 00:12:06.513 ], 00:12:06.513 "allow_any_host": true, 00:12:06.513 "hosts": [], 00:12:06.513 "serial_number": "SPDK2", 00:12:06.513 "model_number": "SPDK bdev Controller", 00:12:06.513 "max_namespaces": 32, 00:12:06.513 "min_cntlid": 1, 00:12:06.513 "max_cntlid": 65519, 00:12:06.513 "namespaces": [ 00:12:06.513 { 00:12:06.513 "nsid": 1, 00:12:06.513 "bdev_name": "Malloc2", 00:12:06.513 "name": "Malloc2", 00:12:06.513 "nguid": "06F1E65AB0A54A0488BC7E7ADF765AD3", 00:12:06.513 "uuid": "06f1e65a-b0a5-4a04-88bc-7e7adf765ad3" 00:12:06.513 } 00:12:06.513 ] 00:12:06.513 } 00:12:06.513 ] 00:12:06.513 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:06.513 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3126964 00:12:06.513 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:06.513 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:06.513 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:06.513 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:06.513 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:06.513 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:06.513 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:06.513 18:54:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:06.513 EAL: No free 2048 kB hugepages reported on node 1 00:12:06.513 [2024-07-24 18:54:44.044604] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:06.771 Malloc3 00:12:06.771 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:07.029 [2024-07-24 18:54:44.397237] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:07.029 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:07.029 Asynchronous Event Request test 00:12:07.029 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:07.029 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:07.029 Registering asynchronous event callbacks... 00:12:07.029 Starting namespace attribute notice tests for all controllers... 00:12:07.029 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:07.029 aer_cb - Changed Namespace 00:12:07.029 Cleaning up... 
00:12:07.288 [ 00:12:07.288 { 00:12:07.288 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:07.288 "subtype": "Discovery", 00:12:07.288 "listen_addresses": [], 00:12:07.288 "allow_any_host": true, 00:12:07.288 "hosts": [] 00:12:07.288 }, 00:12:07.288 { 00:12:07.288 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:07.288 "subtype": "NVMe", 00:12:07.288 "listen_addresses": [ 00:12:07.288 { 00:12:07.288 "trtype": "VFIOUSER", 00:12:07.288 "adrfam": "IPv4", 00:12:07.288 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:07.288 "trsvcid": "0" 00:12:07.288 } 00:12:07.288 ], 00:12:07.288 "allow_any_host": true, 00:12:07.288 "hosts": [], 00:12:07.288 "serial_number": "SPDK1", 00:12:07.288 "model_number": "SPDK bdev Controller", 00:12:07.288 "max_namespaces": 32, 00:12:07.288 "min_cntlid": 1, 00:12:07.288 "max_cntlid": 65519, 00:12:07.288 "namespaces": [ 00:12:07.288 { 00:12:07.288 "nsid": 1, 00:12:07.288 "bdev_name": "Malloc1", 00:12:07.288 "name": "Malloc1", 00:12:07.288 "nguid": "755B06B8757D45A999F087852DB90F48", 00:12:07.288 "uuid": "755b06b8-757d-45a9-99f0-87852db90f48" 00:12:07.288 }, 00:12:07.288 { 00:12:07.288 "nsid": 2, 00:12:07.288 "bdev_name": "Malloc3", 00:12:07.288 "name": "Malloc3", 00:12:07.288 "nguid": "680857F4862649719282240F4A6B44BF", 00:12:07.288 "uuid": "680857f4-8626-4971-9282-240f4a6b44bf" 00:12:07.288 } 00:12:07.288 ] 00:12:07.288 }, 00:12:07.288 { 00:12:07.288 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:07.288 "subtype": "NVMe", 00:12:07.288 "listen_addresses": [ 00:12:07.288 { 00:12:07.288 "trtype": "VFIOUSER", 00:12:07.288 "adrfam": "IPv4", 00:12:07.288 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:07.288 "trsvcid": "0" 00:12:07.288 } 00:12:07.288 ], 00:12:07.288 "allow_any_host": true, 00:12:07.288 "hosts": [], 00:12:07.288 "serial_number": "SPDK2", 00:12:07.288 "model_number": "SPDK bdev Controller", 00:12:07.288 "max_namespaces": 32, 00:12:07.288 "min_cntlid": 1, 00:12:07.288 "max_cntlid": 65519, 00:12:07.288 "namespaces": [ 00:12:07.288 { 00:12:07.288 "nsid": 1, 00:12:07.288 "bdev_name": "Malloc2", 00:12:07.288 "name": "Malloc2", 00:12:07.288 "nguid": "06F1E65AB0A54A0488BC7E7ADF765AD3", 00:12:07.288 "uuid": "06f1e65a-b0a5-4a04-88bc-7e7adf765ad3" 00:12:07.288 } 00:12:07.288 ] 00:12:07.288 } 00:12:07.288 ] 00:12:07.288 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3126964 00:12:07.288 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:07.288 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:07.288 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:07.288 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:07.288 [2024-07-24 18:54:44.691190] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
00:12:07.288 [2024-07-24 18:54:44.691233] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3127097 ] 00:12:07.288 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.288 [2024-07-24 18:54:44.726203] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:07.288 [2024-07-24 18:54:44.732402] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:07.288 [2024-07-24 18:54:44.732432] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f46f8fd8000 00:12:07.288 [2024-07-24 18:54:44.733406] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:07.288 [2024-07-24 18:54:44.734408] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:07.288 [2024-07-24 18:54:44.735410] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:07.288 [2024-07-24 18:54:44.736407] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:07.288 [2024-07-24 18:54:44.737428] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:07.288 [2024-07-24 18:54:44.738419] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:07.288 [2024-07-24 18:54:44.739427] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:07.288 [2024-07-24 18:54:44.740431] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:07.288 [2024-07-24 18:54:44.741448] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:07.288 [2024-07-24 18:54:44.741469] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f46f8fcd000 00:12:07.288 [2024-07-24 18:54:44.742582] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:07.288 [2024-07-24 18:54:44.756791] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:07.288 [2024-07-24 18:54:44.756822] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:07.288 [2024-07-24 18:54:44.761934] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:07.288 [2024-07-24 18:54:44.761987] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:07.288 [2024-07-24 18:54:44.762075] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:12:07.288 [2024-07-24 18:54:44.762118] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:07.288 [2024-07-24 18:54:44.762130] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:07.288 [2024-07-24 18:54:44.762940] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:07.288 [2024-07-24 18:54:44.762966] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:07.288 [2024-07-24 18:54:44.762980] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:07.288 [2024-07-24 18:54:44.763944] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:07.288 [2024-07-24 18:54:44.763964] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:07.288 [2024-07-24 18:54:44.763977] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:07.288 [2024-07-24 18:54:44.764949] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:07.288 [2024-07-24 18:54:44.764970] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:07.288 [2024-07-24 18:54:44.765956] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:07.288 [2024-07-24 18:54:44.765975] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:07.288 [2024-07-24 18:54:44.765984] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:07.288 [2024-07-24 18:54:44.765995] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:07.288 [2024-07-24 18:54:44.766107] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:07.289 [2024-07-24 18:54:44.766117] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:07.289 [2024-07-24 18:54:44.766126] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:07.289 [2024-07-24 18:54:44.766964] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:07.289 [2024-07-24 18:54:44.767971] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:07.289 [2024-07-24 18:54:44.768983] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:07.289 [2024-07-24 18:54:44.769975] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:07.289 [2024-07-24 18:54:44.770039] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:07.289 [2024-07-24 18:54:44.770997] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:07.289 [2024-07-24 18:54:44.771016] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:07.289 [2024-07-24 18:54:44.771025] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:07.289 [2024-07-24 18:54:44.771048] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:07.289 [2024-07-24 18:54:44.771061] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:07.289 [2024-07-24 18:54:44.771079] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:07.289 [2024-07-24 18:54:44.771109] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:07.289 [2024-07-24 18:54:44.771117] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:07.289 [2024-07-24 18:54:44.771134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:07.289 [2024-07-24 18:54:44.779115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:07.289 [2024-07-24 18:54:44.779138] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:07.289 [2024-07-24 18:54:44.779147] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:07.289 [2024-07-24 18:54:44.779155] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:07.289 [2024-07-24 18:54:44.779163] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:07.289 [2024-07-24 18:54:44.779171] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:07.289 [2024-07-24 18:54:44.779179] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:07.289 [2024-07-24 18:54:44.779187] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:07.289 [2024-07-24 18:54:44.779200] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:07.289 [2024-07-24 18:54:44.779220] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:07.289 [2024-07-24 18:54:44.787129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:07.289 [2024-07-24 18:54:44.787156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.289 [2024-07-24 18:54:44.787170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.289 [2024-07-24 18:54:44.787182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.289 [2024-07-24 18:54:44.787198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.289 [2024-07-24 18:54:44.787207] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:07.289 [2024-07-24 18:54:44.787222] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:07.289 [2024-07-24 18:54:44.787237] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:07.289 [2024-07-24 18:54:44.795125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:07.289 [2024-07-24 18:54:44.795142] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:07.289 [2024-07-24 18:54:44.795152] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:07.289 [2024-07-24 18:54:44.795168] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:07.289 [2024-07-24 18:54:44.795179] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:07.289 [2024-07-24 18:54:44.795193] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:07.289 [2024-07-24 18:54:44.803124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:07.289 [2024-07-24 18:54:44.803198] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:07.289 [2024-07-24 18:54:44.803215] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:07.289 [2024-07-24 18:54:44.803229] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:07.289 [2024-07-24 18:54:44.803237] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:07.289 [2024-07-24 
18:54:44.803243] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:07.289 [2024-07-24 18:54:44.803253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:07.289 [2024-07-24 18:54:44.811114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:07.289 [2024-07-24 18:54:44.811136] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:07.289 [2024-07-24 18:54:44.811156] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:07.289 [2024-07-24 18:54:44.811170] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:07.289 [2024-07-24 18:54:44.811183] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:07.289 [2024-07-24 18:54:44.811191] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:07.289 [2024-07-24 18:54:44.811197] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:07.289 [2024-07-24 18:54:44.811207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:07.289 [2024-07-24 18:54:44.819111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:07.289 [2024-07-24 18:54:44.819142] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:07.289 [2024-07-24 18:54:44.819159] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:07.289 [2024-07-24 18:54:44.819172] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:07.289 [2024-07-24 18:54:44.819181] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:07.289 [2024-07-24 18:54:44.819187] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:07.289 [2024-07-24 18:54:44.819196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:07.289 [2024-07-24 18:54:44.827114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:07.289 [2024-07-24 18:54:44.827135] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:07.289 [2024-07-24 18:54:44.827147] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:07.289 [2024-07-24 18:54:44.827163] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:07.289 [2024-07-24 
18:54:44.827176] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:12:07.289 [2024-07-24 18:54:44.827185] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:07.289 [2024-07-24 18:54:44.827193] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:07.289 [2024-07-24 18:54:44.827201] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:07.289 [2024-07-24 18:54:44.827209] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:07.289 [2024-07-24 18:54:44.827217] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:07.289 [2024-07-24 18:54:44.827240] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:07.289 [2024-07-24 18:54:44.835114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:07.289 [2024-07-24 18:54:44.835150] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:07.289 [2024-07-24 18:54:44.843114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:07.289 [2024-07-24 18:54:44.843139] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:07.289 [2024-07-24 18:54:44.851112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:07.289 [2024-07-24 18:54:44.851137] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:07.290 [2024-07-24 18:54:44.859114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:07.290 [2024-07-24 18:54:44.859145] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:07.290 [2024-07-24 18:54:44.859160] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:07.290 [2024-07-24 18:54:44.859166] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:07.290 [2024-07-24 18:54:44.859172] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:07.290 [2024-07-24 18:54:44.859178] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:07.290 [2024-07-24 18:54:44.859188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:07.290 [2024-07-24 18:54:44.859200] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:07.290 [2024-07-24 18:54:44.859208] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: 
*DEBUG*: prp1 = 0x2000002fc000 00:12:07.290 [2024-07-24 18:54:44.859214] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:07.290 [2024-07-24 18:54:44.859223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:07.290 [2024-07-24 18:54:44.859234] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:07.290 [2024-07-24 18:54:44.859242] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:07.290 [2024-07-24 18:54:44.859248] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:07.290 [2024-07-24 18:54:44.859257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:07.290 [2024-07-24 18:54:44.859268] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:07.290 [2024-07-24 18:54:44.859276] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:07.290 [2024-07-24 18:54:44.859282] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:07.290 [2024-07-24 18:54:44.859291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:07.290 [2024-07-24 18:54:44.867126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:07.290 [2024-07-24 18:54:44.867164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:07.290 [2024-07-24 18:54:44.867183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:07.290 [2024-07-24 18:54:44.867195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:07.290 ===================================================== 00:12:07.290 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:07.290 ===================================================== 00:12:07.290 Controller Capabilities/Features 00:12:07.290 ================================ 00:12:07.290 Vendor ID: 4e58 00:12:07.290 Subsystem Vendor ID: 4e58 00:12:07.290 Serial Number: SPDK2 00:12:07.290 Model Number: SPDK bdev Controller 00:12:07.290 Firmware Version: 24.09 00:12:07.290 Recommended Arb Burst: 6 00:12:07.290 IEEE OUI Identifier: 8d 6b 50 00:12:07.290 Multi-path I/O 00:12:07.290 May have multiple subsystem ports: Yes 00:12:07.290 May have multiple controllers: Yes 00:12:07.290 Associated with SR-IOV VF: No 00:12:07.290 Max Data Transfer Size: 131072 00:12:07.290 Max Number of Namespaces: 32 00:12:07.290 Max Number of I/O Queues: 127 00:12:07.290 NVMe Specification Version (VS): 1.3 00:12:07.290 NVMe Specification Version (Identify): 1.3 00:12:07.290 Maximum Queue Entries: 256 00:12:07.290 Contiguous Queues Required: Yes 00:12:07.290 Arbitration Mechanisms Supported 00:12:07.290 Weighted Round Robin: Not Supported 00:12:07.290 Vendor Specific: Not Supported 00:12:07.290 Reset Timeout: 15000 ms 00:12:07.290 Doorbell Stride: 4 
bytes 00:12:07.290 NVM Subsystem Reset: Not Supported 00:12:07.290 Command Sets Supported 00:12:07.290 NVM Command Set: Supported 00:12:07.290 Boot Partition: Not Supported 00:12:07.290 Memory Page Size Minimum: 4096 bytes 00:12:07.290 Memory Page Size Maximum: 4096 bytes 00:12:07.290 Persistent Memory Region: Not Supported 00:12:07.290 Optional Asynchronous Events Supported 00:12:07.290 Namespace Attribute Notices: Supported 00:12:07.290 Firmware Activation Notices: Not Supported 00:12:07.290 ANA Change Notices: Not Supported 00:12:07.290 PLE Aggregate Log Change Notices: Not Supported 00:12:07.290 LBA Status Info Alert Notices: Not Supported 00:12:07.290 EGE Aggregate Log Change Notices: Not Supported 00:12:07.290 Normal NVM Subsystem Shutdown event: Not Supported 00:12:07.290 Zone Descriptor Change Notices: Not Supported 00:12:07.290 Discovery Log Change Notices: Not Supported 00:12:07.290 Controller Attributes 00:12:07.290 128-bit Host Identifier: Supported 00:12:07.290 Non-Operational Permissive Mode: Not Supported 00:12:07.290 NVM Sets: Not Supported 00:12:07.290 Read Recovery Levels: Not Supported 00:12:07.290 Endurance Groups: Not Supported 00:12:07.290 Predictable Latency Mode: Not Supported 00:12:07.290 Traffic Based Keep ALive: Not Supported 00:12:07.290 Namespace Granularity: Not Supported 00:12:07.290 SQ Associations: Not Supported 00:12:07.290 UUID List: Not Supported 00:12:07.290 Multi-Domain Subsystem: Not Supported 00:12:07.290 Fixed Capacity Management: Not Supported 00:12:07.290 Variable Capacity Management: Not Supported 00:12:07.290 Delete Endurance Group: Not Supported 00:12:07.290 Delete NVM Set: Not Supported 00:12:07.290 Extended LBA Formats Supported: Not Supported 00:12:07.290 Flexible Data Placement Supported: Not Supported 00:12:07.290 00:12:07.290 Controller Memory Buffer Support 00:12:07.290 ================================ 00:12:07.290 Supported: No 00:12:07.290 00:12:07.290 Persistent Memory Region Support 00:12:07.290 ================================ 00:12:07.290 Supported: No 00:12:07.290 00:12:07.290 Admin Command Set Attributes 00:12:07.290 ============================ 00:12:07.290 Security Send/Receive: Not Supported 00:12:07.290 Format NVM: Not Supported 00:12:07.290 Firmware Activate/Download: Not Supported 00:12:07.290 Namespace Management: Not Supported 00:12:07.290 Device Self-Test: Not Supported 00:12:07.290 Directives: Not Supported 00:12:07.290 NVMe-MI: Not Supported 00:12:07.290 Virtualization Management: Not Supported 00:12:07.290 Doorbell Buffer Config: Not Supported 00:12:07.290 Get LBA Status Capability: Not Supported 00:12:07.290 Command & Feature Lockdown Capability: Not Supported 00:12:07.290 Abort Command Limit: 4 00:12:07.290 Async Event Request Limit: 4 00:12:07.290 Number of Firmware Slots: N/A 00:12:07.290 Firmware Slot 1 Read-Only: N/A 00:12:07.290 Firmware Activation Without Reset: N/A 00:12:07.290 Multiple Update Detection Support: N/A 00:12:07.290 Firmware Update Granularity: No Information Provided 00:12:07.290 Per-Namespace SMART Log: No 00:12:07.290 Asymmetric Namespace Access Log Page: Not Supported 00:12:07.290 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:07.290 Command Effects Log Page: Supported 00:12:07.290 Get Log Page Extended Data: Supported 00:12:07.290 Telemetry Log Pages: Not Supported 00:12:07.290 Persistent Event Log Pages: Not Supported 00:12:07.290 Supported Log Pages Log Page: May Support 00:12:07.290 Commands Supported & Effects Log Page: Not Supported 00:12:07.290 Feature Identifiers & Effects Log 
Page:May Support 00:12:07.290 NVMe-MI Commands & Effects Log Page: May Support 00:12:07.290 Data Area 4 for Telemetry Log: Not Supported 00:12:07.290 Error Log Page Entries Supported: 128 00:12:07.290 Keep Alive: Supported 00:12:07.290 Keep Alive Granularity: 10000 ms 00:12:07.290 00:12:07.290 NVM Command Set Attributes 00:12:07.290 ========================== 00:12:07.290 Submission Queue Entry Size 00:12:07.290 Max: 64 00:12:07.290 Min: 64 00:12:07.290 Completion Queue Entry Size 00:12:07.290 Max: 16 00:12:07.290 Min: 16 00:12:07.290 Number of Namespaces: 32 00:12:07.290 Compare Command: Supported 00:12:07.290 Write Uncorrectable Command: Not Supported 00:12:07.291 Dataset Management Command: Supported 00:12:07.291 Write Zeroes Command: Supported 00:12:07.291 Set Features Save Field: Not Supported 00:12:07.291 Reservations: Not Supported 00:12:07.291 Timestamp: Not Supported 00:12:07.291 Copy: Supported 00:12:07.291 Volatile Write Cache: Present 00:12:07.291 Atomic Write Unit (Normal): 1 00:12:07.291 Atomic Write Unit (PFail): 1 00:12:07.291 Atomic Compare & Write Unit: 1 00:12:07.291 Fused Compare & Write: Supported 00:12:07.291 Scatter-Gather List 00:12:07.291 SGL Command Set: Supported (Dword aligned) 00:12:07.291 SGL Keyed: Not Supported 00:12:07.291 SGL Bit Bucket Descriptor: Not Supported 00:12:07.291 SGL Metadata Pointer: Not Supported 00:12:07.291 Oversized SGL: Not Supported 00:12:07.291 SGL Metadata Address: Not Supported 00:12:07.291 SGL Offset: Not Supported 00:12:07.291 Transport SGL Data Block: Not Supported 00:12:07.291 Replay Protected Memory Block: Not Supported 00:12:07.291 00:12:07.291 Firmware Slot Information 00:12:07.291 ========================= 00:12:07.291 Active slot: 1 00:12:07.291 Slot 1 Firmware Revision: 24.09 00:12:07.291 00:12:07.291 00:12:07.291 Commands Supported and Effects 00:12:07.291 ============================== 00:12:07.291 Admin Commands 00:12:07.291 -------------- 00:12:07.291 Get Log Page (02h): Supported 00:12:07.291 Identify (06h): Supported 00:12:07.291 Abort (08h): Supported 00:12:07.291 Set Features (09h): Supported 00:12:07.291 Get Features (0Ah): Supported 00:12:07.291 Asynchronous Event Request (0Ch): Supported 00:12:07.291 Keep Alive (18h): Supported 00:12:07.291 I/O Commands 00:12:07.291 ------------ 00:12:07.291 Flush (00h): Supported LBA-Change 00:12:07.291 Write (01h): Supported LBA-Change 00:12:07.291 Read (02h): Supported 00:12:07.291 Compare (05h): Supported 00:12:07.291 Write Zeroes (08h): Supported LBA-Change 00:12:07.291 Dataset Management (09h): Supported LBA-Change 00:12:07.291 Copy (19h): Supported LBA-Change 00:12:07.291 00:12:07.291 Error Log 00:12:07.291 ========= 00:12:07.291 00:12:07.291 Arbitration 00:12:07.291 =========== 00:12:07.291 Arbitration Burst: 1 00:12:07.291 00:12:07.291 Power Management 00:12:07.291 ================ 00:12:07.291 Number of Power States: 1 00:12:07.291 Current Power State: Power State #0 00:12:07.291 Power State #0: 00:12:07.291 Max Power: 0.00 W 00:12:07.291 Non-Operational State: Operational 00:12:07.291 Entry Latency: Not Reported 00:12:07.291 Exit Latency: Not Reported 00:12:07.291 Relative Read Throughput: 0 00:12:07.291 Relative Read Latency: 0 00:12:07.291 Relative Write Throughput: 0 00:12:07.291 Relative Write Latency: 0 00:12:07.291 Idle Power: Not Reported 00:12:07.291 Active Power: Not Reported 00:12:07.291 Non-Operational Permissive Mode: Not Supported 00:12:07.291 00:12:07.291 Health Information 00:12:07.291 ================== 00:12:07.291 Critical Warnings: 00:12:07.291 
Available Spare Space: OK 00:12:07.291 Temperature: OK 00:12:07.291 Device Reliability: OK 00:12:07.291 Read Only: No 00:12:07.291 Volatile Memory Backup: OK 00:12:07.291 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:07.291 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:07.291 Available Spare: 0% 00:12:07.291 [2024-07-24 18:54:44.867310] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:07.291 [2024-07-24 18:54:44.875111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:07.291 [2024-07-24 18:54:44.875158] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:07.291 [2024-07-24 18:54:44.875176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.291 [2024-07-24 18:54:44.875187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.291 [2024-07-24 18:54:44.875197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.291 [2024-07-24 18:54:44.875207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.291 [2024-07-24 18:54:44.875272] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:07.291 [2024-07-24 18:54:44.875296] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:07.291 [2024-07-24 18:54:44.876276] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:07.291 [2024-07-24 18:54:44.876349] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:07.291 [2024-07-24 18:54:44.876363] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:07.291 [2024-07-24 18:54:44.877287] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:07.291 [2024-07-24 18:54:44.877311] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:07.291 [2024-07-24 18:54:44.877362] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:07.291 [2024-07-24 18:54:44.878575] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:07.549 Available Spare Threshold: 0% 00:12:07.549 Life Percentage Used: 0% 00:12:07.549 Data Units Read: 0 00:12:07.549 Data Units Written: 0 00:12:07.549 Host Read Commands: 0 00:12:07.549 Host Write Commands: 0 00:12:07.549 Controller Busy Time: 0 minutes 00:12:07.549 Power Cycles: 0 00:12:07.549 Power On Hours: 0 hours 00:12:07.549 Unsafe Shutdowns: 0 00:12:07.549 Unrecoverable Media Errors: 0 00:12:07.549 Lifetime Error Log Entries: 0 00:12:07.549 Warning Temperature Time: 0 minutes 00:12:07.549 Critical Temperature Time: 0 minutes 00:12:07.549 
00:12:07.549 Number of Queues 00:12:07.549 ================ 00:12:07.549 Number of I/O Submission Queues: 127 00:12:07.549 Number of I/O Completion Queues: 127 00:12:07.549 00:12:07.549 Active Namespaces 00:12:07.549 ================= 00:12:07.549 Namespace ID:1 00:12:07.549 Error Recovery Timeout: Unlimited 00:12:07.549 Command Set Identifier: NVM (00h) 00:12:07.549 Deallocate: Supported 00:12:07.549 Deallocated/Unwritten Error: Not Supported 00:12:07.549 Deallocated Read Value: Unknown 00:12:07.549 Deallocate in Write Zeroes: Not Supported 00:12:07.549 Deallocated Guard Field: 0xFFFF 00:12:07.549 Flush: Supported 00:12:07.549 Reservation: Supported 00:12:07.549 Namespace Sharing Capabilities: Multiple Controllers 00:12:07.549 Size (in LBAs): 131072 (0GiB) 00:12:07.549 Capacity (in LBAs): 131072 (0GiB) 00:12:07.549 Utilization (in LBAs): 131072 (0GiB) 00:12:07.549 NGUID: 06F1E65AB0A54A0488BC7E7ADF765AD3 00:12:07.549 UUID: 06f1e65a-b0a5-4a04-88bc-7e7adf765ad3 00:12:07.549 Thin Provisioning: Not Supported 00:12:07.549 Per-NS Atomic Units: Yes 00:12:07.549 Atomic Boundary Size (Normal): 0 00:12:07.549 Atomic Boundary Size (PFail): 0 00:12:07.549 Atomic Boundary Offset: 0 00:12:07.549 Maximum Single Source Range Length: 65535 00:12:07.549 Maximum Copy Length: 65535 00:12:07.549 Maximum Source Range Count: 1 00:12:07.549 NGUID/EUI64 Never Reused: No 00:12:07.549 Namespace Write Protected: No 00:12:07.549 Number of LBA Formats: 1 00:12:07.549 Current LBA Format: LBA Format #00 00:12:07.549 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:07.549 00:12:07.549 18:54:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:07.549 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.549 [2024-07-24 18:54:45.106911] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:12.821 Initializing NVMe Controllers 00:12:12.821 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:12.821 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:12.821 Initialization complete. Launching workers. 
00:12:12.821 ======================================================== 00:12:12.821 Latency(us) 00:12:12.821 Device Information : IOPS MiB/s Average min max 00:12:12.821 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33492.60 130.83 3822.28 1189.41 9953.55 00:12:12.821 ======================================================== 00:12:12.821 Total : 33492.60 130.83 3822.28 1189.41 9953.55 00:12:12.821 00:12:12.821 [2024-07-24 18:54:50.213478] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:12.821 18:54:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:12.821 EAL: No free 2048 kB hugepages reported on node 1 00:12:13.080 [2024-07-24 18:54:50.457209] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:18.352 Initializing NVMe Controllers 00:12:18.352 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:18.352 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:18.352 Initialization complete. Launching workers. 00:12:18.352 ======================================================== 00:12:18.352 Latency(us) 00:12:18.352 Device Information : IOPS MiB/s Average min max 00:12:18.352 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 30499.40 119.14 4198.07 1225.45 8330.26 00:12:18.352 ======================================================== 00:12:18.352 Total : 30499.40 119.14 4198.07 1225.45 8330.26 00:12:18.352 00:12:18.352 [2024-07-24 18:54:55.478730] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:18.352 18:54:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:18.352 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.352 [2024-07-24 18:54:55.691637] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:23.627 [2024-07-24 18:55:00.822240] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:23.627 Initializing NVMe Controllers 00:12:23.627 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:23.627 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:23.627 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:23.627 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:23.627 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:23.627 Initialization complete. Launching workers. 
00:12:23.627 Starting thread on core 2 00:12:23.627 Starting thread on core 3 00:12:23.627 Starting thread on core 1 00:12:23.627 18:55:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:23.627 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.627 [2024-07-24 18:55:01.114485] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:26.909 [2024-07-24 18:55:04.175143] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:26.909 Initializing NVMe Controllers 00:12:26.909 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:26.909 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:26.909 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:26.909 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:26.909 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:26.909 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:26.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:26.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:26.909 Initialization complete. Launching workers. 00:12:26.909 Starting thread on core 1 with urgent priority queue 00:12:26.909 Starting thread on core 2 with urgent priority queue 00:12:26.909 Starting thread on core 3 with urgent priority queue 00:12:26.909 Starting thread on core 0 with urgent priority queue 00:12:26.909 SPDK bdev Controller (SPDK2 ) core 0: 4975.33 IO/s 20.10 secs/100000 ios 00:12:26.909 SPDK bdev Controller (SPDK2 ) core 1: 5303.00 IO/s 18.86 secs/100000 ios 00:12:26.909 SPDK bdev Controller (SPDK2 ) core 2: 5575.00 IO/s 17.94 secs/100000 ios 00:12:26.909 SPDK bdev Controller (SPDK2 ) core 3: 4647.33 IO/s 21.52 secs/100000 ios 00:12:26.909 ======================================================== 00:12:26.909 00:12:26.909 18:55:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:26.909 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.909 [2024-07-24 18:55:04.477050] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:26.909 Initializing NVMe Controllers 00:12:26.909 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:26.909 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:26.909 Namespace ID: 1 size: 0GB 00:12:26.909 Initialization complete. 00:12:26.909 INFO: using host memory buffer for IO 00:12:26.909 Hello world! 
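All of the host-side tools in this pass accept the same transport ID string for the vfio-user controller, so the identify, perf, reconnect, arbitration and hello_world runs above can be replayed by hand. A minimal sketch, assuming the SPDK build tree and vfio-user socket paths used by this job, with every flag copied from the invocations above:
#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
# Identify the controller, with NVMe/vfio-user debug tracing enabled.
$SPDK/build/bin/spdk_nvme_identify -r "$TRID" -g -L nvme -L nvme_vfio -L vfio_pci
# 5-second 4 KiB read and write passes at queue depth 128 on lcore 1 (-c 0x2).
$SPDK/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
$SPDK/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
# 50/50 random read/write reconnect test on three lcores (-c 0xE).
$SPDK/build/examples/reconnect -r "$TRID" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
# Arbitration and hello_world examples against the same controller.
$SPDK/build/examples/arbitration -t 3 -r "$TRID" -d 256 -g
$SPDK/build/examples/hello_world -d 256 -g -r "$TRID"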
00:12:26.909 [2024-07-24 18:55:04.492161] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:27.166 18:55:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:27.166 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.423 [2024-07-24 18:55:04.795734] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:28.357 Initializing NVMe Controllers 00:12:28.357 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:28.357 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:28.357 Initialization complete. Launching workers. 00:12:28.357 submit (in ns) avg, min, max = 8501.9, 3546.7, 4014596.7 00:12:28.357 complete (in ns) avg, min, max = 23903.4, 2057.8, 4013652.2 00:12:28.357 00:12:28.357 Submit histogram 00:12:28.357 ================ 00:12:28.357 Range in us Cumulative Count 00:12:28.357 3.532 - 3.556: 0.1378% ( 18) 00:12:28.357 3.556 - 3.579: 4.7856% ( 607) 00:12:28.357 3.579 - 3.603: 13.1853% ( 1097) 00:12:28.357 3.603 - 3.627: 25.2067% ( 1570) 00:12:28.357 3.627 - 3.650: 35.7121% ( 1372) 00:12:28.357 3.650 - 3.674: 43.3308% ( 995) 00:12:28.357 3.674 - 3.698: 48.3997% ( 662) 00:12:28.357 3.698 - 3.721: 52.7565% ( 569) 00:12:28.357 3.721 - 3.745: 56.8147% ( 530) 00:12:28.357 3.745 - 3.769: 60.7887% ( 519) 00:12:28.357 3.769 - 3.793: 64.4946% ( 484) 00:12:28.357 3.793 - 3.816: 67.9862% ( 456) 00:12:28.357 3.816 - 3.840: 72.4426% ( 582) 00:12:28.357 3.840 - 3.864: 77.3201% ( 637) 00:12:28.357 3.864 - 3.887: 81.4931% ( 545) 00:12:28.357 3.887 - 3.911: 84.4946% ( 392) 00:12:28.357 3.911 - 3.935: 86.8147% ( 303) 00:12:28.357 3.935 - 3.959: 88.4227% ( 210) 00:12:28.357 3.959 - 3.982: 89.7703% ( 176) 00:12:28.357 3.982 - 4.006: 91.0107% ( 162) 00:12:28.357 4.006 - 4.030: 92.0750% ( 139) 00:12:28.357 4.030 - 4.053: 93.0781% ( 131) 00:12:28.357 4.053 - 4.077: 94.0276% ( 124) 00:12:28.357 4.077 - 4.101: 95.0153% ( 129) 00:12:28.357 4.101 - 4.124: 95.6202% ( 79) 00:12:28.357 4.124 - 4.148: 95.9801% ( 47) 00:12:28.357 4.148 - 4.172: 96.3247% ( 45) 00:12:28.357 4.172 - 4.196: 96.5544% ( 30) 00:12:28.357 4.196 - 4.219: 96.7994% ( 32) 00:12:28.357 4.219 - 4.243: 96.9525% ( 20) 00:12:28.357 4.243 - 4.267: 97.0827% ( 17) 00:12:28.357 4.267 - 4.290: 97.2358% ( 20) 00:12:28.357 4.290 - 4.314: 97.3047% ( 9) 00:12:28.357 4.314 - 4.338: 97.3813% ( 10) 00:12:28.357 4.338 - 4.361: 97.4655% ( 11) 00:12:28.357 4.361 - 4.385: 97.5268% ( 8) 00:12:28.357 4.385 - 4.409: 97.5498% ( 3) 00:12:28.357 4.409 - 4.433: 97.5957% ( 6) 00:12:28.357 4.433 - 4.456: 97.6340% ( 5) 00:12:28.357 4.456 - 4.480: 97.6723% ( 5) 00:12:28.357 4.480 - 4.504: 97.6799% ( 1) 00:12:28.357 4.504 - 4.527: 97.6876% ( 1) 00:12:28.357 4.527 - 4.551: 97.7106% ( 3) 00:12:28.357 4.646 - 4.670: 97.7335% ( 3) 00:12:28.357 4.670 - 4.693: 97.7412% ( 1) 00:12:28.357 4.693 - 4.717: 97.7489% ( 1) 00:12:28.357 4.764 - 4.788: 97.7642% ( 2) 00:12:28.357 4.788 - 4.812: 97.7871% ( 3) 00:12:28.357 4.812 - 4.836: 97.8101% ( 3) 00:12:28.357 4.836 - 4.859: 97.8560% ( 6) 00:12:28.357 4.859 - 4.883: 97.9326% ( 10) 00:12:28.357 4.883 - 4.907: 97.9786% ( 6) 00:12:28.357 4.907 - 4.930: 98.0475% ( 9) 00:12:28.357 4.930 - 4.954: 98.0781% ( 4) 00:12:28.357 4.954 - 4.978: 98.1394% ( 8) 00:12:28.357 4.978 - 5.001: 98.1776% ( 5) 
00:12:28.357 5.001 - 5.025: 98.2006% ( 3) 00:12:28.357 5.025 - 5.049: 98.2236% ( 3) 00:12:28.357 5.049 - 5.073: 98.2466% ( 3) 00:12:28.357 5.073 - 5.096: 98.2772% ( 4) 00:12:28.357 5.096 - 5.120: 98.3078% ( 4) 00:12:28.357 5.120 - 5.144: 98.3614% ( 7) 00:12:28.357 5.144 - 5.167: 98.3767% ( 2) 00:12:28.357 5.167 - 5.191: 98.4074% ( 4) 00:12:28.357 5.191 - 5.215: 98.4227% ( 2) 00:12:28.357 5.262 - 5.286: 98.4456% ( 3) 00:12:28.357 5.286 - 5.310: 98.4533% ( 1) 00:12:28.357 5.310 - 5.333: 98.4686% ( 2) 00:12:28.357 5.333 - 5.357: 98.4763% ( 1) 00:12:28.357 5.381 - 5.404: 98.4916% ( 2) 00:12:28.357 5.404 - 5.428: 98.5069% ( 2) 00:12:28.357 5.523 - 5.547: 98.5222% ( 2) 00:12:28.357 5.594 - 5.618: 98.5299% ( 1) 00:12:28.357 5.641 - 5.665: 98.5452% ( 2) 00:12:28.357 5.689 - 5.713: 98.5528% ( 1) 00:12:28.357 5.736 - 5.760: 98.5605% ( 1) 00:12:28.357 5.831 - 5.855: 98.5681% ( 1) 00:12:28.357 6.021 - 6.044: 98.5758% ( 1) 00:12:28.357 6.116 - 6.163: 98.5835% ( 1) 00:12:28.357 6.353 - 6.400: 98.5911% ( 1) 00:12:28.357 6.684 - 6.732: 98.5988% ( 1) 00:12:28.357 6.827 - 6.874: 98.6064% ( 1) 00:12:28.357 7.016 - 7.064: 98.6141% ( 1) 00:12:28.357 7.111 - 7.159: 98.6294% ( 2) 00:12:28.357 7.301 - 7.348: 98.6371% ( 1) 00:12:28.357 7.396 - 7.443: 98.6447% ( 1) 00:12:28.357 7.538 - 7.585: 98.6677% ( 3) 00:12:28.357 7.633 - 7.680: 98.6753% ( 1) 00:12:28.357 7.727 - 7.775: 98.6830% ( 1) 00:12:28.357 7.822 - 7.870: 98.6907% ( 1) 00:12:28.357 7.870 - 7.917: 98.6983% ( 1) 00:12:28.357 7.917 - 7.964: 98.7060% ( 1) 00:12:28.357 7.964 - 8.012: 98.7366% ( 4) 00:12:28.357 8.012 - 8.059: 98.7443% ( 1) 00:12:28.357 8.107 - 8.154: 98.7596% ( 2) 00:12:28.357 8.154 - 8.201: 98.7672% ( 1) 00:12:28.357 8.296 - 8.344: 98.7825% ( 2) 00:12:28.357 8.344 - 8.391: 98.7979% ( 2) 00:12:28.357 8.391 - 8.439: 98.8055% ( 1) 00:12:28.357 8.581 - 8.628: 98.8285% ( 3) 00:12:28.357 8.628 - 8.676: 98.8361% ( 1) 00:12:28.357 8.770 - 8.818: 98.8438% ( 1) 00:12:28.357 8.865 - 8.913: 98.8591% ( 2) 00:12:28.357 8.913 - 8.960: 98.8668% ( 1) 00:12:28.357 9.481 - 9.529: 98.8744% ( 1) 00:12:28.357 9.766 - 9.813: 98.8821% ( 1) 00:12:28.357 9.861 - 9.908: 98.8897% ( 1) 00:12:28.357 9.908 - 9.956: 98.8974% ( 1) 00:12:28.357 10.003 - 10.050: 98.9051% ( 1) 00:12:28.357 10.287 - 10.335: 98.9127% ( 1) 00:12:28.357 10.430 - 10.477: 98.9204% ( 1) 00:12:28.357 11.141 - 11.188: 98.9280% ( 1) 00:12:28.357 11.520 - 11.567: 98.9357% ( 1) 00:12:28.357 11.662 - 11.710: 98.9433% ( 1) 00:12:28.357 13.559 - 13.653: 98.9510% ( 1) 00:12:28.357 13.748 - 13.843: 98.9587% ( 1) 00:12:28.357 13.843 - 13.938: 98.9663% ( 1) 00:12:28.357 13.938 - 14.033: 98.9740% ( 1) 00:12:28.357 14.222 - 14.317: 98.9816% ( 1) 00:12:28.357 14.981 - 15.076: 98.9893% ( 1) 00:12:28.357 16.972 - 17.067: 99.0046% ( 2) 00:12:28.357 17.161 - 17.256: 99.0199% ( 2) 00:12:28.357 17.256 - 17.351: 99.0352% ( 2) 00:12:28.357 17.351 - 17.446: 99.0812% ( 6) 00:12:28.357 17.446 - 17.541: 99.1041% ( 3) 00:12:28.357 17.541 - 17.636: 99.1194% ( 2) 00:12:28.357 17.636 - 17.730: 99.1501% ( 4) 00:12:28.357 17.730 - 17.825: 99.1960% ( 6) 00:12:28.357 17.825 - 17.920: 99.2573% ( 8) 00:12:28.357 17.920 - 18.015: 99.3109% ( 7) 00:12:28.357 18.015 - 18.110: 99.3338% ( 3) 00:12:28.357 18.110 - 18.204: 99.4028% ( 9) 00:12:28.357 18.204 - 18.299: 99.4717% ( 9) 00:12:28.357 18.299 - 18.394: 99.5176% ( 6) 00:12:28.357 18.394 - 18.489: 99.5712% ( 7) 00:12:28.357 18.489 - 18.584: 99.6631% ( 12) 00:12:28.357 18.584 - 18.679: 99.6861% ( 3) 00:12:28.357 18.679 - 18.773: 99.7243% ( 5) 00:12:28.357 18.773 - 18.868: 99.7779% ( 7) 
00:12:28.357 18.868 - 18.963: 99.7933% ( 2) 00:12:28.357 18.963 - 19.058: 99.8086% ( 2) 00:12:28.357 19.153 - 19.247: 99.8162% ( 1) 00:12:28.357 19.247 - 19.342: 99.8239% ( 1) 00:12:28.357 19.342 - 19.437: 99.8392% ( 2) 00:12:28.357 19.532 - 19.627: 99.8469% ( 1) 00:12:28.357 19.627 - 19.721: 99.8545% ( 1) 00:12:28.357 19.721 - 19.816: 99.8622% ( 1) 00:12:28.357 20.196 - 20.290: 99.8698% ( 1) 00:12:28.357 20.290 - 20.385: 99.8775% ( 1) 00:12:28.357 22.187 - 22.281: 99.8851% ( 1) 00:12:28.357 3980.705 - 4004.978: 99.9923% ( 14) 00:12:28.357 4004.978 - 4029.250: 100.0000% ( 1) 00:12:28.357 00:12:28.357 Complete histogram 00:12:28.357 ================== 00:12:28.357 Range in us Cumulative Count 00:12:28.357 2.050 - 2.062: 0.3063% ( 40) 00:12:28.357 2.062 - 2.074: 23.4533% ( 3023) 00:12:28.357 2.074 - 2.086: 41.8377% ( 2401) 00:12:28.357 2.086 - 2.098: 44.1041% ( 296) 00:12:28.357 2.098 - 2.110: 55.3982% ( 1475) 00:12:28.357 2.110 - 2.121: 59.0352% ( 475) 00:12:28.357 2.121 - 2.133: 61.3859% ( 307) 00:12:28.357 2.133 - 2.145: 70.3982% ( 1177) 00:12:28.357 2.145 - 2.157: 74.0276% ( 474) 00:12:28.357 2.157 - 2.169: 75.8882% ( 243) 00:12:28.357 2.169 - 2.181: 80.2221% ( 566) 00:12:28.357 2.181 - 2.193: 81.8147% ( 208) 00:12:28.358 2.193 - 2.204: 82.7489% ( 122) 00:12:28.358 2.204 - 2.216: 86.5926% ( 502) 00:12:28.358 2.216 - 2.228: 89.8162% ( 421) 00:12:28.358 2.228 - 2.240: 91.2557% ( 188) 00:12:28.358 2.240 - 2.252: 93.0704% ( 237) 00:12:28.358 2.252 - 2.264: 93.7749% ( 92) 00:12:28.358 2.264 - 2.276: 94.0735% ( 39) 00:12:28.358 2.276 - 2.287: 94.3874% ( 41) 00:12:28.358 2.287 - 2.299: 95.0689% ( 89) 00:12:28.358 2.299 - 2.311: 95.5972% ( 69) 00:12:28.358 2.311 - 2.323: 95.7887% ( 25) 00:12:28.358 2.323 - 2.335: 95.8576% ( 9) 00:12:28.358 2.335 - 2.347: 95.8882% ( 4) 00:12:28.358 2.347 - 2.359: 96.0031% ( 15) 00:12:28.358 2.359 - 2.370: 96.3093% ( 40) 00:12:28.358 2.370 - 2.382: 96.6462% ( 44) 00:12:28.358 2.382 - 2.394: 96.9602% ( 41) 00:12:28.358 2.394 - 2.406: 97.1593% ( 26) 00:12:28.358 2.406 - 2.418: 97.3737% ( 28) 00:12:28.358 2.418 - 2.430: 97.5421% ( 22) 00:12:28.358 2.430 - 2.441: 97.6646% ( 16) 00:12:28.358 2.441 - 2.453: 97.7565% ( 12) 00:12:28.358 2.453 - 2.465: 97.9020% ( 19) 00:12:28.358 2.465 - 2.477: 98.0168% ( 15) 00:12:28.358 2.477 - 2.489: 98.1164% ( 13) 00:12:28.358 2.489 - 2.501: 98.1930% ( 10) 00:12:28.358 2.501 - 2.513: 98.2389% ( 6) 00:12:28.358 2.513 - 2.524: 98.2772% ( 5) 00:12:28.358 2.524 - 2.536: 98.2925% ( 2) 00:12:28.358 2.536 - 2.548: 98.3155% ( 3) 00:12:28.358 2.548 - 2.560: 98.3231% ( 1) 00:12:28.358 2.560 - 2.572: 98.3384% ( 2) 00:12:28.358 2.584 - 2.596: 98.3461% ( 1) 00:12:28.358 2.619 - 2.631: 98.3538% ( 1) 00:12:28.358 2.631 - 2.643: 98.3614% ( 1) 00:12:28.358 2.643 - 2.655: 98.3691% ( 1) 00:12:28.358 2.667 - 2.679: 98.3767% ( 1) 00:12:28.358 2.690 - 2.702: 98.3844% ( 1) 00:12:28.358 2.702 - 2.714: 98.3997% ( 2) 00:12:28.358 2.726 - 2.738: 98.4074% ( 1) 00:12:28.358 3.224 - 3.247: 98.4150% ( 1) 00:12:28.358 3.342 - 3.366: 98.4303% ( 2) 00:12:28.358 3.366 - 3.390: 98.4456% ( 2) 00:12:28.358 3.390 - 3.413: 98.4609% ( 2) 00:12:28.358 3.437 - 3.461: 98.4916% ( 4) 00:12:28.358 3.461 - 3.484: 98.5069% ( 2) 00:12:28.358 3.484 - 3.508: 98.5299% ( 3) 00:12:28.358 3.508 - 3.532: 98.5452% ( 2) 00:12:28.358 3.532 - 3.556: 98.5528% ( 1) 00:12:28.358 3.556 - 3.579: 98.5605% ( 1) 00:12:28.358 3.579 - 3.603: 98.5758% ( 2) 00:12:28.358 3.674 - 3.698: 98.5911% ( 2) 00:12:28.358 3.793 - 3.816: 98.5988% ( 1) 00:12:28.358 3.816 - 3.840: 98.6217% ( 3) 00:12:28.358 3.840 - 
3.864: 98.6371% ( 2) 00:12:28.358 4.006 - 4.030: 98.6600% ( 3) 00:12:28.358 4.124 - 4.148: 98.6677% ( 1) 00:12:28.358 4.172 - 4.196: 98.6753% ( 1) 00:12:28.358 5.120 - 5.144: 98.6830% ( 1) 00:12:28.358 5.381 - 5.404: 98.6907% ( 1) 00:12:28.358 5.499 - 5.523: 98.6983% ( 1) 00:12:28.358 5.689 - 5.713: 98.7060% ( 1) 00:12:28.358 5.713 - 5.736: 98.7136% ( 1) 00:12:28.358 5.760 - 5.784: 98.7213% ( 1) 00:12:28.358 5.855 - 5.879: 98.7289% ( 1) 00:12:28.358 6.044 - 6.068: 98.7366% ( 1) 00:12:28.358 6.116 - 6.163: 98.7596% ( 3) 00:12:28.358 6.210 - 6.258: 98.7749% ( 2) 00:12:28.358 6.258 - 6.305: 98.7902% ( 2) 00:12:28.358 6.400 - 6.447: 98.7979% ( 1) 00:12:28.358 6.447 - 6.495: 98.8055% ( 1) 00:12:28.358 6.590 - 6.637: 98.8208% ( 2) 00:12:28.358 6.684 - 6.732: 98.8285% ( 1) 00:12:28.358 6.732 - 6.779: 98.8361% ( 1) 00:12:28.358 6.969 - 7.016: 98.8515% ( 2) 00:12:28.358 7.490 - 7.538: 98.8591% ( 1) 00:12:28.358 9.292 - 9.339: 98.8668% ( 1) 00:12:28.358 9.766 - 9.813: 98.8744% ( 1) 00:12:28.358 12.800 - 12.895: 98.8821% ( 1) 00:12:28.358 15.550 - 15.644: 98.8974% ( 2) 00:12:28.358 15.644 - 15.739: 98.9127% ( 2) 00:12:28.358 15.739 - 15.834: 98.9357% ( 3) 00:12:28.358 15.929 - 16.024: 98.9587% ( 3) 00:12:28.358 16.024 - 16.119: 99.0046% ( 6) 00:12:28.358 16.119 - 16.213: 99.0352% ( 4) 00:12:28.358 16.213 - 16.308: 99.0658% ( 4) 00:12:28.358 16.308 - 16.403: 99.0965% ( 4) 00:12:28.358 [2024-07-24 18:55:05.896827] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:28.358 16.403 - 16.498: 99.1654% ( 9) 00:12:28.358 16.498 - 16.593: 99.2037% ( 5) 00:12:28.358 16.593 - 16.687: 99.2420% ( 5) 00:12:28.358 16.687 - 16.782: 99.2573% ( 2) 00:12:28.358 16.782 - 16.877: 99.3032% ( 6) 00:12:28.358 16.877 - 16.972: 99.3568% ( 7) 00:12:28.358 16.972 - 17.067: 99.3798% ( 3) 00:12:28.358 17.161 - 17.256: 99.3951% ( 2) 00:12:28.358 17.256 - 17.351: 99.4104% ( 2) 00:12:28.358 17.351 - 17.446: 99.4257% ( 2) 00:12:28.358 18.110 - 18.204: 99.4334% ( 1) 00:12:28.358 18.394 - 18.489: 99.4410% ( 1) 00:12:28.358 18.679 - 18.773: 99.4487% ( 1) 00:12:28.358 25.031 - 25.221: 99.4564% ( 1) 00:12:28.358 3228.255 - 3252.527: 99.4640% ( 1) 00:12:28.358 3980.705 - 4004.978: 99.9617% ( 65) 00:12:28.358 4004.978 - 4029.250: 100.0000% ( 5) 00:12:28.358 00:12:28.358 18:55:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:28.358 18:55:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:28.358 18:55:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:28.358 18:55:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:28.358 18:55:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:28.616 [ 00:12:28.616 { 00:12:28.616 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:28.616 "subtype": "Discovery", 00:12:28.616 "listen_addresses": [], 00:12:28.616 "allow_any_host": true, 00:12:28.616 "hosts": [] 00:12:28.616 }, 00:12:28.616 { 00:12:28.616 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:28.616 "subtype": "NVMe", 00:12:28.616 "listen_addresses": [ 00:12:28.616 { 00:12:28.616 "trtype": "VFIOUSER", 00:12:28.616 "adrfam": "IPv4", 00:12:28.616 
"traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:28.616 "trsvcid": "0" 00:12:28.616 } 00:12:28.616 ], 00:12:28.616 "allow_any_host": true, 00:12:28.616 "hosts": [], 00:12:28.616 "serial_number": "SPDK1", 00:12:28.616 "model_number": "SPDK bdev Controller", 00:12:28.616 "max_namespaces": 32, 00:12:28.616 "min_cntlid": 1, 00:12:28.616 "max_cntlid": 65519, 00:12:28.616 "namespaces": [ 00:12:28.616 { 00:12:28.616 "nsid": 1, 00:12:28.616 "bdev_name": "Malloc1", 00:12:28.616 "name": "Malloc1", 00:12:28.616 "nguid": "755B06B8757D45A999F087852DB90F48", 00:12:28.616 "uuid": "755b06b8-757d-45a9-99f0-87852db90f48" 00:12:28.616 }, 00:12:28.616 { 00:12:28.616 "nsid": 2, 00:12:28.616 "bdev_name": "Malloc3", 00:12:28.616 "name": "Malloc3", 00:12:28.616 "nguid": "680857F4862649719282240F4A6B44BF", 00:12:28.616 "uuid": "680857f4-8626-4971-9282-240f4a6b44bf" 00:12:28.616 } 00:12:28.616 ] 00:12:28.616 }, 00:12:28.616 { 00:12:28.616 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:28.616 "subtype": "NVMe", 00:12:28.616 "listen_addresses": [ 00:12:28.616 { 00:12:28.616 "trtype": "VFIOUSER", 00:12:28.616 "adrfam": "IPv4", 00:12:28.616 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:28.616 "trsvcid": "0" 00:12:28.616 } 00:12:28.616 ], 00:12:28.616 "allow_any_host": true, 00:12:28.616 "hosts": [], 00:12:28.616 "serial_number": "SPDK2", 00:12:28.616 "model_number": "SPDK bdev Controller", 00:12:28.616 "max_namespaces": 32, 00:12:28.616 "min_cntlid": 1, 00:12:28.616 "max_cntlid": 65519, 00:12:28.616 "namespaces": [ 00:12:28.616 { 00:12:28.616 "nsid": 1, 00:12:28.616 "bdev_name": "Malloc2", 00:12:28.616 "name": "Malloc2", 00:12:28.616 "nguid": "06F1E65AB0A54A0488BC7E7ADF765AD3", 00:12:28.616 "uuid": "06f1e65a-b0a5-4a04-88bc-7e7adf765ad3" 00:12:28.616 } 00:12:28.616 ] 00:12:28.616 } 00:12:28.616 ] 00:12:28.616 18:55:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:28.616 18:55:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3129621 00:12:28.616 18:55:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:28.616 18:55:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:28.616 18:55:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:28.616 18:55:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:28.616 18:55:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:28.616 18:55:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:28.616 18:55:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:28.616 18:55:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:28.874 EAL: No free 2048 kB hugepages reported on node 1 00:12:28.874 [2024-07-24 18:55:06.361554] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:28.874 Malloc4 00:12:29.131 18:55:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:29.131 [2024-07-24 18:55:06.700026] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:29.131 18:55:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:29.388 Asynchronous Event Request test 00:12:29.388 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:29.388 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:29.388 Registering asynchronous event callbacks... 00:12:29.388 Starting namespace attribute notice tests for all controllers... 00:12:29.388 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:29.388 aer_cb - Changed Namespace 00:12:29.388 Cleaning up... 00:12:29.388 [ 00:12:29.388 { 00:12:29.388 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:29.388 "subtype": "Discovery", 00:12:29.388 "listen_addresses": [], 00:12:29.388 "allow_any_host": true, 00:12:29.388 "hosts": [] 00:12:29.388 }, 00:12:29.388 { 00:12:29.388 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:29.388 "subtype": "NVMe", 00:12:29.388 "listen_addresses": [ 00:12:29.388 { 00:12:29.388 "trtype": "VFIOUSER", 00:12:29.388 "adrfam": "IPv4", 00:12:29.388 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:29.388 "trsvcid": "0" 00:12:29.388 } 00:12:29.388 ], 00:12:29.388 "allow_any_host": true, 00:12:29.388 "hosts": [], 00:12:29.388 "serial_number": "SPDK1", 00:12:29.388 "model_number": "SPDK bdev Controller", 00:12:29.388 "max_namespaces": 32, 00:12:29.388 "min_cntlid": 1, 00:12:29.388 "max_cntlid": 65519, 00:12:29.388 "namespaces": [ 00:12:29.388 { 00:12:29.388 "nsid": 1, 00:12:29.388 "bdev_name": "Malloc1", 00:12:29.388 "name": "Malloc1", 00:12:29.388 "nguid": "755B06B8757D45A999F087852DB90F48", 00:12:29.388 "uuid": "755b06b8-757d-45a9-99f0-87852db90f48" 00:12:29.388 }, 00:12:29.388 { 00:12:29.388 "nsid": 2, 00:12:29.388 "bdev_name": "Malloc3", 00:12:29.389 "name": "Malloc3", 00:12:29.389 "nguid": "680857F4862649719282240F4A6B44BF", 00:12:29.389 "uuid": "680857f4-8626-4971-9282-240f4a6b44bf" 00:12:29.389 } 00:12:29.389 ] 00:12:29.389 }, 00:12:29.389 { 00:12:29.389 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:29.389 "subtype": "NVMe", 00:12:29.389 "listen_addresses": [ 00:12:29.389 { 00:12:29.389 "trtype": "VFIOUSER", 00:12:29.389 "adrfam": "IPv4", 00:12:29.389 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:29.389 "trsvcid": "0" 00:12:29.389 } 00:12:29.389 ], 00:12:29.389 "allow_any_host": true, 00:12:29.389 "hosts": [], 00:12:29.389 
"serial_number": "SPDK2", 00:12:29.389 "model_number": "SPDK bdev Controller", 00:12:29.389 "max_namespaces": 32, 00:12:29.389 "min_cntlid": 1, 00:12:29.389 "max_cntlid": 65519, 00:12:29.389 "namespaces": [ 00:12:29.389 { 00:12:29.389 "nsid": 1, 00:12:29.389 "bdev_name": "Malloc2", 00:12:29.389 "name": "Malloc2", 00:12:29.389 "nguid": "06F1E65AB0A54A0488BC7E7ADF765AD3", 00:12:29.389 "uuid": "06f1e65a-b0a5-4a04-88bc-7e7adf765ad3" 00:12:29.389 }, 00:12:29.389 { 00:12:29.389 "nsid": 2, 00:12:29.389 "bdev_name": "Malloc4", 00:12:29.389 "name": "Malloc4", 00:12:29.389 "nguid": "E41F5994DCCD4E94B40D063C80031DB6", 00:12:29.389 "uuid": "e41f5994-dccd-4e94-b40d-063c80031db6" 00:12:29.389 } 00:12:29.389 ] 00:12:29.389 } 00:12:29.389 ] 00:12:29.389 18:55:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3129621 00:12:29.389 18:55:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:29.389 18:55:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3124027 00:12:29.389 18:55:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 3124027 ']' 00:12:29.389 18:55:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 3124027 00:12:29.389 18:55:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:12:29.389 18:55:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:29.389 18:55:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3124027 00:12:29.389 18:55:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:29.389 18:55:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:29.389 18:55:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3124027' 00:12:29.389 killing process with pid 3124027 00:12:29.389 18:55:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 3124027 00:12:29.389 18:55:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 3124027 00:12:29.955 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:29.955 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:29.955 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:29.955 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:29.955 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:29.955 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3129763 00:12:29.955 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:29.955 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3129763' 00:12:29.955 Process pid: 3129763 00:12:29.955 18:55:07 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:29.955 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3129763 00:12:29.955 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 3129763 ']' 00:12:29.955 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.955 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:29.955 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.955 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:29.955 18:55:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:29.955 [2024-07-24 18:55:07.420327] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:29.955 [2024-07-24 18:55:07.421368] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:12:29.955 [2024-07-24 18:55:07.421455] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.955 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.955 [2024-07-24 18:55:07.482683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:30.213 [2024-07-24 18:55:07.598404] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:30.213 [2024-07-24 18:55:07.598461] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:30.213 [2024-07-24 18:55:07.598488] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:30.213 [2024-07-24 18:55:07.598501] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:30.213 [2024-07-24 18:55:07.598513] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:30.213 [2024-07-24 18:55:07.598596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.213 [2024-07-24 18:55:07.598662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:30.213 [2024-07-24 18:55:07.598754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:30.213 [2024-07-24 18:55:07.598757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.213 [2024-07-24 18:55:07.713602] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:30.213 [2024-07-24 18:55:07.713816] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:30.213 [2024-07-24 18:55:07.714163] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:12:30.213 [2024-07-24 18:55:07.714819] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:30.213 [2024-07-24 18:55:07.715057] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:12:30.776 18:55:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:30.776 18:55:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:12:30.776 18:55:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:32.195 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:32.195 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:32.195 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:32.195 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:32.195 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:32.195 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:32.454 Malloc1 00:12:32.454 18:55:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:32.713 18:55:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:32.970 18:55:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:33.228 18:55:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:33.228 18:55:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:33.228 18:55:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:33.486 Malloc2 00:12:33.486 18:55:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:33.743 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:34.001 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 
-s 0 00:12:34.259 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:34.259 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3129763 00:12:34.259 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 3129763 ']' 00:12:34.259 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 3129763 00:12:34.259 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:12:34.259 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:34.259 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3129763 00:12:34.259 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:34.259 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:34.259 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3129763' 00:12:34.259 killing process with pid 3129763 00:12:34.259 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 3129763 00:12:34.259 18:55:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 3129763 00:12:34.516 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:34.516 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:34.516 00:12:34.516 real 0m53.199s 00:12:34.516 user 3m29.573s 00:12:34.516 sys 0m4.568s 00:12:34.516 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:34.516 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:34.516 ************************************ 00:12:34.516 END TEST nvmf_vfio_user 00:12:34.516 ************************************ 00:12:34.517 18:55:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:34.517 18:55:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:34.517 18:55:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:34.517 18:55:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:34.775 ************************************ 00:12:34.775 START TEST nvmf_vfio_user_nvme_compliance 00:12:34.775 ************************************ 00:12:34.775 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:34.775 * Looking for test storage... 
00:12:34.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:34.775 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:34.775 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:12:34.775 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:34.775 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.775 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.775 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.775 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.775 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.775 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.775 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.775 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.775 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.775 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:34.775 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:34.775 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.775 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.775 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:34.775 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:34.775 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:34.775 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.775 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.775 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.775 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.775 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.775 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.775 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:12:34.775 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.775 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:12:34.775 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:34.775 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:34.775 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:34.775 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.775 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.776 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:34.776 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:34.776 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:34.776 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:34.776 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:34.776 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:34.776 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:34.776 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:34.776 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3130370 00:12:34.776 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:34.776 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3130370' 00:12:34.776 Process pid: 3130370 00:12:34.776 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:34.776 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3130370 00:12:34.776 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 3130370 ']' 00:12:34.776 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.776 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:34.776 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.776 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:34.776 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:34.776 [2024-07-24 18:55:12.249568] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
00:12:34.776 [2024-07-24 18:55:12.249636] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.776 EAL: No free 2048 kB hugepages reported on node 1 00:12:34.776 [2024-07-24 18:55:12.305697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:35.033 [2024-07-24 18:55:12.412410] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:35.033 [2024-07-24 18:55:12.412456] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:35.033 [2024-07-24 18:55:12.412480] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:35.033 [2024-07-24 18:55:12.412491] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:35.033 [2024-07-24 18:55:12.412501] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:35.033 [2024-07-24 18:55:12.412581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.033 [2024-07-24 18:55:12.412647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:35.033 [2024-07-24 18:55:12.412650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.033 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:35.033 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:12:35.033 18:55:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:12:35.966 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:35.967 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:35.967 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:35.967 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.967 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:35.967 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.967 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:35.967 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:35.967 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.967 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:36.225 malloc0 00:12:36.225 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.225 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 
32 00:12:36.225 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.225 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:36.225 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.225 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:36.225 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.225 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:36.225 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.225 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:36.225 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.225 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:36.225 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.225 18:55:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:36.225 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.225 00:12:36.225 00:12:36.225 CUnit - A unit testing framework for C - Version 2.1-3 00:12:36.225 http://cunit.sourceforge.net/ 00:12:36.225 00:12:36.225 00:12:36.225 Suite: nvme_compliance 00:12:36.225 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-24 18:55:13.761635] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:36.225 [2024-07-24 18:55:13.763040] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:36.225 [2024-07-24 18:55:13.763063] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:36.225 [2024-07-24 18:55:13.763112] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:36.225 [2024-07-24 18:55:13.767678] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:36.225 passed 00:12:36.483 Test: admin_identify_ctrlr_verify_fused ...[2024-07-24 18:55:13.854277] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:36.483 [2024-07-24 18:55:13.857302] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:36.483 passed 00:12:36.483 Test: admin_identify_ns ...[2024-07-24 18:55:13.942592] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:36.483 [2024-07-24 18:55:14.001146] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:36.483 [2024-07-24 18:55:14.010122] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:36.483 [2024-07-24 
18:55:14.031230] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:36.483 passed 00:12:36.740 Test: admin_get_features_mandatory_features ...[2024-07-24 18:55:14.115292] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:36.740 [2024-07-24 18:55:14.118311] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:36.740 passed 00:12:36.740 Test: admin_get_features_optional_features ...[2024-07-24 18:55:14.203880] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:36.740 [2024-07-24 18:55:14.206901] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:36.740 passed 00:12:36.740 Test: admin_set_features_number_of_queues ...[2024-07-24 18:55:14.290033] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:36.997 [2024-07-24 18:55:14.396221] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:36.997 passed 00:12:36.997 Test: admin_get_log_page_mandatory_logs ...[2024-07-24 18:55:14.478687] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:36.997 [2024-07-24 18:55:14.481711] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:36.997 passed 00:12:36.997 Test: admin_get_log_page_with_lpo ...[2024-07-24 18:55:14.561588] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:37.254 [2024-07-24 18:55:14.633134] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:37.254 [2024-07-24 18:55:14.646192] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:37.254 passed 00:12:37.254 Test: fabric_property_get ...[2024-07-24 18:55:14.728720] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:37.254 [2024-07-24 18:55:14.729989] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:37.254 [2024-07-24 18:55:14.731749] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:37.254 passed 00:12:37.254 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-24 18:55:14.815275] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:37.254 [2024-07-24 18:55:14.816590] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:37.254 [2024-07-24 18:55:14.818300] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:37.254 passed 00:12:37.511 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-24 18:55:14.900663] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:37.511 [2024-07-24 18:55:14.984112] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:37.511 [2024-07-24 18:55:15.000110] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:37.511 [2024-07-24 18:55:15.005220] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:37.511 passed 00:12:37.511 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-24 18:55:15.087415] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:37.511 [2024-07-24 18:55:15.088713] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 
00:12:37.511 [2024-07-24 18:55:15.090430] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:37.768 passed 00:12:37.768 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-24 18:55:15.176945] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:37.768 [2024-07-24 18:55:15.252130] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:37.768 [2024-07-24 18:55:15.275113] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:37.768 [2024-07-24 18:55:15.280233] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:37.768 passed 00:12:37.768 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-24 18:55:15.363170] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:37.768 [2024-07-24 18:55:15.364510] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:37.768 [2024-07-24 18:55:15.364554] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:37.768 [2024-07-24 18:55:15.366194] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:38.026 passed 00:12:38.026 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-24 18:55:15.448606] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:38.026 [2024-07-24 18:55:15.540131] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:12:38.026 [2024-07-24 18:55:15.548109] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:38.026 [2024-07-24 18:55:15.556108] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:38.026 [2024-07-24 18:55:15.564111] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:38.026 [2024-07-24 18:55:15.593229] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:38.283 passed 00:12:38.283 Test: admin_create_io_sq_verify_pc ...[2024-07-24 18:55:15.676454] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:38.283 [2024-07-24 18:55:15.695123] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:38.283 [2024-07-24 18:55:15.712849] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:38.283 passed 00:12:38.283 Test: admin_create_io_qp_max_qps ...[2024-07-24 18:55:15.794426] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:39.656 [2024-07-24 18:55:16.888120] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:12:39.913 [2024-07-24 18:55:17.268463] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:39.913 passed 00:12:39.913 Test: admin_create_io_sq_shared_cq ...[2024-07-24 18:55:17.351703] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:39.913 [2024-07-24 18:55:17.483110] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:40.171 [2024-07-24 18:55:17.520191] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:40.171 passed 00:12:40.171 00:12:40.171 Run Summary: Type Total Ran Passed Failed Inactive 00:12:40.171 
suites 1 1 n/a 0 0 00:12:40.171 tests 18 18 18 0 0 00:12:40.171 asserts 360 360 360 0 n/a 00:12:40.171 00:12:40.171 Elapsed time = 1.558 seconds 00:12:40.171 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3130370 00:12:40.171 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 3130370 ']' 00:12:40.171 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 3130370 00:12:40.171 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:12:40.172 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:40.172 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3130370 00:12:40.172 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:40.172 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:40.172 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3130370' 00:12:40.172 killing process with pid 3130370 00:12:40.172 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 3130370 00:12:40.172 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 3130370 00:12:40.429 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:12:40.429 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:40.429 00:12:40.429 real 0m5.774s 00:12:40.429 user 0m16.210s 00:12:40.429 sys 0m0.537s 00:12:40.429 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:40.429 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:40.429 ************************************ 00:12:40.429 END TEST nvmf_vfio_user_nvme_compliance 00:12:40.429 ************************************ 00:12:40.429 18:55:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:40.429 18:55:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:40.429 18:55:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:40.429 18:55:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:40.429 ************************************ 00:12:40.429 START TEST nvmf_vfio_user_fuzz 00:12:40.429 ************************************ 00:12:40.429 18:55:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:40.429 * Looking for test storage... 
00:12:40.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:40.429 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:40.429 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:12:40.429 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:40.429 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:40.429 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:40.429 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:40.429 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:40.429 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:40.429 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:40.429 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:40.429 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:40.429 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:40.430 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:40.430 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:40.430 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:40.430 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:40.430 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:40.430 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:40.430 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:40.430 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:40.430 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:40.430 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:40.430 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.430 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.430 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.430 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:12:40.430 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.430 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:12:40.430 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:40.430 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:40.430 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:40.430 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:40.430 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:40.430 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:12:40.430 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:40.430 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:40.688 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:40.688 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:40.688 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:40.688 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:12:40.688 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:40.688 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:40.688 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:12:40.688 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3131093 00:12:40.688 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:40.688 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3131093' 00:12:40.688 Process pid: 3131093 00:12:40.688 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:40.688 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3131093 00:12:40.688 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 3131093 ']' 00:12:40.688 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.688 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:40.688 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
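For reference, the fuzz-target launch just traced (together with the wait that completes below) condenses to the following shell sketch. The nvmf_tgt and rpc.py paths and the -i/-e/-m flags are exactly the ones logged; the polling loop is a stand-in for the suite's waitforlisten helper, and rpc_get_methods is used here only as a cheap RPC to probe the socket with.

    # Launch the NVMe-oF target: shm id 0 (-i), all tracepoint groups (-e 0xFFFF), core mask 0x1 (-m)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # Stand-in for waitforlisten: block until /var/tmp/spdk.sock answers RPCs
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done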
00:12:40.688 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:40.688 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:40.946 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:40.946 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:12:40.946 18:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:12:41.880 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:41.880 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.880 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:41.880 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.880 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:12:41.880 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:41.880 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.880 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:41.880 malloc0 00:12:41.880 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.880 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:12:41.880 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.880 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:41.880 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.880 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:41.880 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.880 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:41.880 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.880 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:41.880 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.880 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:41.880 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.880 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
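At this point the vfio-user fuzz target is fully configured. Stripped of the xtrace prefixes, the setup traced above reduces to the following RPC sequence (a condensed sketch of this run, with rpc.py given relative to the SPDK checkout; all names and values are the ones visible in the trace):

scripts/rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
scripts/rpc.py bdev_malloc_create 64 512 -b malloc0    # 64 MB backing bdev, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The nvme_fuzz invocation that follows then drives random admin and I/O commands at that trid for 30 seconds (-t 30) with a fixed seed (-S 123456), so any crash it finds is reproducible.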
00:12:41.880 18:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:13.940 Fuzzing completed. Shutting down the fuzz application 00:13:13.940 00:13:13.940 Dumping successful admin opcodes: 00:13:13.940 8, 9, 10, 24, 00:13:13.940 Dumping successful io opcodes: 00:13:13.940 0, 00:13:13.940 NS: 0x200003a1ef00 I/O qp, Total commands completed: 606476, total successful commands: 2344, random_seed: 227667264 00:13:13.940 NS: 0x200003a1ef00 admin qp, Total commands completed: 89709, total successful commands: 719, random_seed: 3206646400 00:13:13.940 18:55:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:13.940 18:55:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.940 18:55:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:13.940 18:55:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.940 18:55:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3131093 00:13:13.940 18:55:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 3131093 ']' 00:13:13.940 18:55:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 3131093 00:13:13.940 18:55:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:13:13.940 18:55:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:13.940 18:55:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3131093 00:13:13.940 18:55:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:13.940 18:55:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:13.940 18:55:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3131093' 00:13:13.940 killing process with pid 3131093 00:13:13.940 18:55:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 3131093 00:13:13.940 18:55:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 3131093 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:13.941 00:13:13.941 real 0m32.344s 00:13:13.941 user 0m31.968s 00:13:13.941 sys 0m28.634s 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:13.941 
************************************ 00:13:13.941 END TEST nvmf_vfio_user_fuzz 00:13:13.941 ************************************ 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:13.941 ************************************ 00:13:13.941 START TEST nvmf_auth_target 00:13:13.941 ************************************ 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:13.941 * Looking for test storage... 00:13:13.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:13.941 18:55:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 
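The uuid-based host identity generated above (nvmf/common.sh@17-19) is reused for every authentication attempt in this test. A minimal sketch of that step; the parameter-expansion extraction of the host ID is an assumption, while the command and the resulting values come from the trace:

NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumed extraction of the bare uuid from the NQN
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")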
00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:13.941 18:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:14.877 18:55:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:14.877 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:14.877 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:14.877 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.878 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:14.878 Found net devices under 0000:09:00.0: cvl_0_0 00:13:14.878 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.878 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:14.878 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.878 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:14.878 18:55:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:14.878 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:14.878 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:14.878 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.878 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:14.878 Found net devices under 0000:09:00.1: cvl_0_1 00:13:14.878 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.878 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:14.878 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:14.878 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:14.878 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:14.878 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:14.878 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:14.878 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:14.878 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:14.878 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:14.878 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:14.878 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:14.878 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:14.878 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:14.878 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:14.878 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:14.878 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:14.878 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:14.878 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:14.878 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:14.878 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:14.878 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:14.878 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:15.136 18:55:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:15.136 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:15.136 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:15.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:15.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:13:15.136 00:13:15.136 --- 10.0.0.2 ping statistics --- 00:13:15.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.136 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:13:15.136 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:15.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:15.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:13:15.136 00:13:15.136 --- 10.0.0.1 ping statistics --- 00:13:15.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.136 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:13:15.137 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:15.137 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:13:15.137 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:15.137 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:15.137 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:15.137 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:15.137 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:15.137 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:15.137 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:15.137 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:13:15.137 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:15.137 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:15.137 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.137 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3136533 00:13:15.137 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3136533 00:13:15.137 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:15.137 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3136533 ']' 00:13:15.137 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.137 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:15.137 18:55:52 
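nvmf_tcp_init has now split the two E810 ports (presumably cabled back-to-back on this phy rig) into separate network stacks: the target port cvl_0_0 moves into namespace cvl_0_0_ns_spdk as 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, and the two one-packet pings confirm reachability in both directions before any NVMe/TCP traffic is attempted. Condensed from the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator port
ping -c 1 10.0.0.2                                             # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> initiator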
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.137 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:15.137 18:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.070 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:16.070 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:16.070 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:16.070 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:16.070 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.070 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:16.070 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3136688 00:13:16.070 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:16.070 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:16.070 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:13:16.070 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:16.070 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:16.070 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:16.070 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:13:16.070 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:16.070 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:16.070 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=11e32b418aa79d8738c02d92802aa442d9564385a31e514f 00:13:16.070 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:13:16.070 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.TJh 00:13:16.070 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 11e32b418aa79d8738c02d92802aa442d9564385a31e514f 0 00:13:16.070 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 11e32b418aa79d8738c02d92802aa442d9564385a31e514f 0 00:13:16.070 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:16.070 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:16.070 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=11e32b418aa79d8738c02d92802aa442d9564385a31e514f 00:13:16.070 18:55:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:13:16.070 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:16.070 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.TJh 00:13:16.070 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.TJh 00:13:16.070 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.TJh 00:13:16.070 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:13:16.070 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:16.070 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:16.070 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:16.070 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:16.070 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:16.071 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:16.071 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=49ea9dfc72d5f3a2264ba5f263a581417c222ea3e975f061720105dd37c72150 00:13:16.071 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:16.071 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.sIA 00:13:16.071 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 49ea9dfc72d5f3a2264ba5f263a581417c222ea3e975f061720105dd37c72150 3 00:13:16.071 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 49ea9dfc72d5f3a2264ba5f263a581417c222ea3e975f061720105dd37c72150 3 00:13:16.071 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:16.071 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:16.071 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=49ea9dfc72d5f3a2264ba5f263a581417c222ea3e975f061720105dd37c72150 00:13:16.071 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:16.071 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:16.071 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.sIA 00:13:16.071 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.sIA 00:13:16.071 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.sIA 00:13:16.071 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:13:16.071 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:16.071 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:16.071 18:55:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:16.071 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:16.071 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:16.071 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:16.071 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b08bb4a0e522753b1b66b261a8921dc8 00:13:16.071 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:16.071 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.g0i 00:13:16.071 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b08bb4a0e522753b1b66b261a8921dc8 1 00:13:16.071 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b08bb4a0e522753b1b66b261a8921dc8 1 00:13:16.071 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:16.071 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:16.071 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b08bb4a0e522753b1b66b261a8921dc8 00:13:16.071 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:16.071 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.g0i 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.g0i 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.g0i 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=62e9612f381475ba8c1dcf09d23a132bd44c1f04d4633b6b 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.bbU 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 62e9612f381475ba8c1dcf09d23a132bd44c1f04d4633b6b 2 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
62e9612f381475ba8c1dcf09d23a132bd44c1f04d4633b6b 2 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=62e9612f381475ba8c1dcf09d23a132bd44c1f04d4633b6b 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.bbU 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.bbU 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.bbU 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=20b902eb703928cc1173dcc0144bfce8657e30a334023cc4 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.4Sg 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 20b902eb703928cc1173dcc0144bfce8657e30a334023cc4 2 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 20b902eb703928cc1173dcc0144bfce8657e30a334023cc4 2 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=20b902eb703928cc1173dcc0144bfce8657e30a334023cc4 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.4Sg 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.4Sg 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.4Sg 00:13:16.330 18:55:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fcf9e5f09f5156261d0f42c56ea8ad80 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.fco 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fcf9e5f09f5156261d0f42c56ea8ad80 1 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fcf9e5f09f5156261d0f42c56ea8ad80 1 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=fcf9e5f09f5156261d0f42c56ea8ad80 00:13:16.330 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:16.331 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:16.331 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.fco 00:13:16.331 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.fco 00:13:16.331 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.fco 00:13:16.331 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:13:16.331 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:16.331 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:16.331 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:16.331 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:16.331 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:16.331 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:16.331 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d1249ddee24fca0abe3a01fb3652e120cb56dee2f67fc61e27ac0ec236c204ad 00:13:16.331 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:16.331 
18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.GhU 00:13:16.331 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d1249ddee24fca0abe3a01fb3652e120cb56dee2f67fc61e27ac0ec236c204ad 3 00:13:16.331 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d1249ddee24fca0abe3a01fb3652e120cb56dee2f67fc61e27ac0ec236c204ad 3 00:13:16.331 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:16.331 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:16.331 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d1249ddee24fca0abe3a01fb3652e120cb56dee2f67fc61e27ac0ec236c204ad 00:13:16.331 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:16.331 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:16.331 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.GhU 00:13:16.331 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.GhU 00:13:16.331 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.GhU 00:13:16.331 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:13:16.331 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3136533 00:13:16.331 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3136533 ']' 00:13:16.331 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.331 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:16.331 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.331 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:16.331 18:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.589 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:16.589 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:16.589 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3136688 /var/tmp/host.sock 00:13:16.589 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3136688 ']' 00:13:16.589 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:13:16.589 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:16.589 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
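Four host keys (keys[0..3]) and three controller keys (ckeys[0..2]) now sit under /tmp; ckeys[3] is deliberately left empty, so the last key index exercises unidirectional authentication. Each gen_dhchap_key call pulls len/2 random bytes from /dev/urandom with xxd and hands the hex string plus a digest index (null=0, sha256=1, sha384=2, sha512=3, per the digests map above) to an inline python snippet whose body the trace does not show. Assuming the standard DH-HMAC-CHAP secret representation (DHHC-1:<two-digit hash id>:<base64 of the key bytes followed by their little-endian CRC-32>:), a self-contained equivalent for the first key would look like:

key_hex=$(xxd -p -c0 -l 24 /dev/urandom)    # 24 random bytes -> 48 hex chars (the "null 48" case)
keyfile=$(mktemp -t spdk.key-null.XXX)
# Hypothetical stand-in for the untraced inline python; the DHHC-1 layout is an assumption.
python3 - "$key_hex" 0 > "$keyfile" <<'PYEOF'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")   # CRC-32 of the key bytes, little-endian
print(f"DHHC-1:{int(sys.argv[2]):02}:{base64.b64encode(key + crc).decode()}:")
PYEOF
chmod 0600 "$keyfile"                         # secrets must not be world-readable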
00:13:16.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:16.589 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:16.589 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.847 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:16.847 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:16.847 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:13:16.847 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.847 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.847 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.847 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:16.847 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.TJh 00:13:16.847 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.847 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.847 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.847 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.TJh 00:13:16.847 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.TJh 00:13:17.104 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.sIA ]] 00:13:17.104 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sIA 00:13:17.104 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.104 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.104 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.104 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sIA 00:13:17.104 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sIA 00:13:17.361 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:17.361 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.g0i 00:13:17.361 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.361 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.362 18:55:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.362 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.g0i 00:13:17.362 18:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.g0i 00:13:17.624 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.bbU ]] 00:13:17.624 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.bbU 00:13:17.624 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.624 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.624 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.624 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.bbU 00:13:17.624 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.bbU 00:13:17.882 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:17.882 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.4Sg 00:13:17.882 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.882 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.882 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.882 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.4Sg 00:13:17.882 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.4Sg 00:13:18.140 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.fco ]] 00:13:18.140 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.fco 00:13:18.140 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.140 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.140 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.140 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.fco 00:13:18.140 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.fco 00:13:18.398 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:18.398 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.GhU 00:13:18.398 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.398 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.398 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.398 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.GhU 00:13:18.398 18:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.GhU 00:13:18.656 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:13:18.656 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:18.656 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:18.656 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:18.656 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:18.656 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:18.914 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:13:18.914 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:18.914 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:18.914 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:18.914 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:18.914 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:18.914 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.914 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.914 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.914 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.914 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.914 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:19.171 00:13:19.171 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:19.171 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:19.171 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.429 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.429 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.429 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.429 18:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.429 18:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.429 18:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:19.429 { 00:13:19.429 "cntlid": 1, 00:13:19.429 "qid": 0, 00:13:19.429 "state": "enabled", 00:13:19.429 "thread": "nvmf_tgt_poll_group_000", 00:13:19.429 "listen_address": { 00:13:19.429 "trtype": "TCP", 00:13:19.429 "adrfam": "IPv4", 00:13:19.429 "traddr": "10.0.0.2", 00:13:19.429 "trsvcid": "4420" 00:13:19.429 }, 00:13:19.429 "peer_address": { 00:13:19.429 "trtype": "TCP", 00:13:19.429 "adrfam": "IPv4", 00:13:19.429 "traddr": "10.0.0.1", 00:13:19.429 "trsvcid": "47200" 00:13:19.429 }, 00:13:19.429 "auth": { 00:13:19.429 "state": "completed", 00:13:19.429 "digest": "sha256", 00:13:19.429 "dhgroup": "null" 00:13:19.429 } 00:13:19.429 } 00:13:19.429 ]' 00:13:19.429 18:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:19.686 18:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:19.686 18:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:19.686 18:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:19.686 18:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:19.687 18:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.687 18:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.687 18:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:19.945 18:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret 
DHHC-1:00:MTFlMzJiNDE4YWE3OWQ4NzM4YzAyZDkyODAyYWE0NDJkOTU2NDM4NWEzMWU1MTRmJe5BTw==: --dhchap-ctrl-secret DHHC-1:03:NDllYTlkZmM3MmQ1ZjNhMjI2NGJhNWYyNjNhNTgxNDE3YzIyMmVhM2U5NzVmMDYxNzIwMTA1ZGQzN2M3MjE1MNOdqyA=: 00:13:20.877 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.877 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:20.877 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.877 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.877 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.877 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:20.877 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:20.877 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:21.135 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:13:21.135 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:21.135 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:21.135 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:21.135 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:21.135 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:21.135 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:21.135 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.135 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.135 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.135 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:21.135 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:13:21.391 00:13:21.391 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:21.391 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.391 18:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:21.647 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.647 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.647 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.647 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.647 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.647 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:21.647 { 00:13:21.647 "cntlid": 3, 00:13:21.647 "qid": 0, 00:13:21.647 "state": "enabled", 00:13:21.647 "thread": "nvmf_tgt_poll_group_000", 00:13:21.647 "listen_address": { 00:13:21.647 "trtype": "TCP", 00:13:21.647 "adrfam": "IPv4", 00:13:21.647 "traddr": "10.0.0.2", 00:13:21.647 "trsvcid": "4420" 00:13:21.647 }, 00:13:21.647 "peer_address": { 00:13:21.647 "trtype": "TCP", 00:13:21.647 "adrfam": "IPv4", 00:13:21.647 "traddr": "10.0.0.1", 00:13:21.647 "trsvcid": "47228" 00:13:21.647 }, 00:13:21.647 "auth": { 00:13:21.647 "state": "completed", 00:13:21.647 "digest": "sha256", 00:13:21.647 "dhgroup": "null" 00:13:21.647 } 00:13:21.647 } 00:13:21.647 ]' 00:13:21.647 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:21.647 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:21.647 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:21.905 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:21.905 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:21.905 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.905 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.905 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:22.163 18:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:YjA4YmI0YTBlNTIyNzUzYjFiNjZiMjYxYTg5MjFkYzhLPinR: --dhchap-ctrl-secret DHHC-1:02:NjJlOTYxMmYzODE0NzViYThjMWRjZjA5ZDIzYTEzMmJkNDRjMWYwNGQ0NjMzYjZiLvkyDg==: 00:13:23.114 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:23.114 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:13:23.114 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:23.114 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.114 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.114 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.114 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:23.114 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:23.114 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:23.371 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:13:23.371 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:23.371 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:23.371 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:23.371 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:23.371 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:23.371 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:23.371 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.371 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.371 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.371 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:23.371 18:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:23.628 00:13:23.628 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:23.628 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:23.628 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.885 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.885 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.885 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.885 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.885 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.885 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:23.885 { 00:13:23.885 "cntlid": 5, 00:13:23.885 "qid": 0, 00:13:23.885 "state": "enabled", 00:13:23.885 "thread": "nvmf_tgt_poll_group_000", 00:13:23.885 "listen_address": { 00:13:23.885 "trtype": "TCP", 00:13:23.885 "adrfam": "IPv4", 00:13:23.885 "traddr": "10.0.0.2", 00:13:23.885 "trsvcid": "4420" 00:13:23.885 }, 00:13:23.885 "peer_address": { 00:13:23.885 "trtype": "TCP", 00:13:23.885 "adrfam": "IPv4", 00:13:23.886 "traddr": "10.0.0.1", 00:13:23.886 "trsvcid": "47244" 00:13:23.886 }, 00:13:23.886 "auth": { 00:13:23.886 "state": "completed", 00:13:23.886 "digest": "sha256", 00:13:23.886 "dhgroup": "null" 00:13:23.886 } 00:13:23.886 } 00:13:23.886 ]' 00:13:23.886 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:23.886 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:23.886 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:23.886 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:23.886 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:23.886 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.886 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.886 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.144 18:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjBiOTAyZWI3MDM5MjhjYzExNzNkY2MwMTQ0YmZjZTg2NTdlMzBhMzM0MDIzY2M0x+cEuw==: --dhchap-ctrl-secret DHHC-1:01:ZmNmOWU1ZjA5ZjUxNTYyNjFkMGY0MmM1NmVhOGFkODDesNza: 00:13:25.075 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:25.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:25.333 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:25.333 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 
-- # xtrace_disable 00:13:25.333 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.333 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.333 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:25.333 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:25.333 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:25.590 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:13:25.590 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:25.590 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:25.590 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:25.590 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:25.590 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:25.590 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:13:25.590 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.590 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.590 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.590 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:25.590 18:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:25.847 00:13:25.847 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:25.847 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.847 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:26.105 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.105 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:26.105 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.105 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.105 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.105 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:26.105 { 00:13:26.105 "cntlid": 7, 00:13:26.105 "qid": 0, 00:13:26.105 "state": "enabled", 00:13:26.105 "thread": "nvmf_tgt_poll_group_000", 00:13:26.105 "listen_address": { 00:13:26.105 "trtype": "TCP", 00:13:26.105 "adrfam": "IPv4", 00:13:26.105 "traddr": "10.0.0.2", 00:13:26.105 "trsvcid": "4420" 00:13:26.105 }, 00:13:26.105 "peer_address": { 00:13:26.105 "trtype": "TCP", 00:13:26.105 "adrfam": "IPv4", 00:13:26.105 "traddr": "10.0.0.1", 00:13:26.105 "trsvcid": "47254" 00:13:26.105 }, 00:13:26.105 "auth": { 00:13:26.105 "state": "completed", 00:13:26.105 "digest": "sha256", 00:13:26.105 "dhgroup": "null" 00:13:26.105 } 00:13:26.105 } 00:13:26.105 ]' 00:13:26.105 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:26.105 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:26.105 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:26.105 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:26.105 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:26.105 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.105 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.105 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.363 18:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDEyNDlkZGVlMjRmY2EwYWJlM2EwMWZiMzY1MmUxMjBjYjU2ZGVlMmY2N2ZjNjFlMjdhYzBlYzIzNmMyMDRhZCsG6tQ=: 00:13:27.295 18:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.295 18:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:27.295 18:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.295 18:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.295 18:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.295 18:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:27.295 18:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:27.295 18:56:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:27.295 18:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:27.554 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:13:27.554 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:27.554 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:27.554 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:27.554 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:27.554 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:27.554 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:27.554 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.554 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.554 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.554 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:27.554 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:28.119 00:13:28.119 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:28.119 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:28.119 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.119 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.119 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:28.119 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.119 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.119 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.119 18:56:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:28.119 { 00:13:28.119 "cntlid": 9, 00:13:28.119 "qid": 0, 00:13:28.119 "state": "enabled", 00:13:28.119 "thread": "nvmf_tgt_poll_group_000", 00:13:28.119 "listen_address": { 00:13:28.119 "trtype": "TCP", 00:13:28.119 "adrfam": "IPv4", 00:13:28.119 "traddr": "10.0.0.2", 00:13:28.119 "trsvcid": "4420" 00:13:28.119 }, 00:13:28.119 "peer_address": { 00:13:28.119 "trtype": "TCP", 00:13:28.119 "adrfam": "IPv4", 00:13:28.119 "traddr": "10.0.0.1", 00:13:28.119 "trsvcid": "47280" 00:13:28.119 }, 00:13:28.119 "auth": { 00:13:28.119 "state": "completed", 00:13:28.119 "digest": "sha256", 00:13:28.119 "dhgroup": "ffdhe2048" 00:13:28.119 } 00:13:28.119 } 00:13:28.119 ]' 00:13:28.119 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:28.378 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:28.378 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:28.378 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:28.378 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:28.378 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.378 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.378 18:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.636 18:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTFlMzJiNDE4YWE3OWQ4NzM4YzAyZDkyODAyYWE0NDJkOTU2NDM4NWEzMWU1MTRmJe5BTw==: --dhchap-ctrl-secret DHHC-1:03:NDllYTlkZmM3MmQ1ZjNhMjI2NGJhNWYyNjNhNTgxNDE3YzIyMmVhM2U5NzVmMDYxNzIwMTA1ZGQzN2M3MjE1MNOdqyA=: 00:13:29.566 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:29.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:29.566 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:29.566 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.566 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.566 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.566 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:29.566 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:29.566 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:29.822 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:13:29.822 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:29.822 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:29.822 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:29.822 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:29.822 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:29.823 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:29.823 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.823 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.823 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.823 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:29.823 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:30.080 00:13:30.080 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:30.080 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:30.080 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:30.338 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:30.339 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:30.339 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.339 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.339 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.339 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:30.339 { 00:13:30.339 "cntlid": 11, 00:13:30.339 "qid": 0, 00:13:30.339 "state": "enabled", 00:13:30.339 "thread": "nvmf_tgt_poll_group_000", 00:13:30.339 "listen_address": { 
00:13:30.339 "trtype": "TCP", 00:13:30.339 "adrfam": "IPv4", 00:13:30.339 "traddr": "10.0.0.2", 00:13:30.339 "trsvcid": "4420" 00:13:30.339 }, 00:13:30.339 "peer_address": { 00:13:30.339 "trtype": "TCP", 00:13:30.339 "adrfam": "IPv4", 00:13:30.339 "traddr": "10.0.0.1", 00:13:30.339 "trsvcid": "44262" 00:13:30.339 }, 00:13:30.339 "auth": { 00:13:30.339 "state": "completed", 00:13:30.339 "digest": "sha256", 00:13:30.339 "dhgroup": "ffdhe2048" 00:13:30.339 } 00:13:30.339 } 00:13:30.339 ]' 00:13:30.339 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:30.339 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:30.339 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:30.596 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:30.596 18:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:30.596 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.596 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:30.596 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.853 18:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:YjA4YmI0YTBlNTIyNzUzYjFiNjZiMjYxYTg5MjFkYzhLPinR: --dhchap-ctrl-secret DHHC-1:02:NjJlOTYxMmYzODE0NzViYThjMWRjZjA5ZDIzYTEzMmJkNDRjMWYwNGQ0NjMzYjZiLvkyDg==: 00:13:31.786 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:31.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:31.786 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:31.786 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.786 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.786 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.786 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:31.786 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:31.786 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:32.043 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:13:32.044 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:32.044 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:32.044 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:32.044 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:32.044 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:32.044 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:32.044 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.044 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.044 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.044 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:32.044 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:32.301 00:13:32.301 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:32.301 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.301 18:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:32.559 18:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.559 18:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:32.559 18:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.559 18:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.559 18:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.559 18:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:32.559 { 00:13:32.559 "cntlid": 13, 00:13:32.559 "qid": 0, 00:13:32.559 "state": "enabled", 00:13:32.559 "thread": "nvmf_tgt_poll_group_000", 00:13:32.559 "listen_address": { 00:13:32.559 "trtype": "TCP", 00:13:32.559 "adrfam": "IPv4", 00:13:32.559 "traddr": "10.0.0.2", 00:13:32.559 "trsvcid": "4420" 00:13:32.559 }, 00:13:32.559 "peer_address": { 00:13:32.559 "trtype": "TCP", 00:13:32.559 "adrfam": "IPv4", 00:13:32.559 "traddr": "10.0.0.1", 00:13:32.559 "trsvcid": "44298" 00:13:32.559 }, 00:13:32.559 "auth": { 00:13:32.559 
"state": "completed", 00:13:32.559 "digest": "sha256", 00:13:32.559 "dhgroup": "ffdhe2048" 00:13:32.559 } 00:13:32.559 } 00:13:32.559 ]' 00:13:32.559 18:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:32.559 18:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:32.559 18:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:32.817 18:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:32.817 18:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:32.817 18:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:32.817 18:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:32.817 18:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:33.075 18:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjBiOTAyZWI3MDM5MjhjYzExNzNkY2MwMTQ0YmZjZTg2NTdlMzBhMzM0MDIzY2M0x+cEuw==: --dhchap-ctrl-secret DHHC-1:01:ZmNmOWU1ZjA5ZjUxNTYyNjFkMGY0MmM1NmVhOGFkODDesNza: 00:13:34.008 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.008 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:34.008 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.008 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.008 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.008 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:34.008 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:34.008 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:34.265 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:13:34.265 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:34.265 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:34.265 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:34.266 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:13:34.266 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:34.266 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:13:34.266 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.266 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.266 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.266 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:34.266 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:34.523 00:13:34.523 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:34.523 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:34.523 18:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.781 18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.781 18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.781 18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.781 18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.781 18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.781 18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:34.781 { 00:13:34.781 "cntlid": 15, 00:13:34.781 "qid": 0, 00:13:34.781 "state": "enabled", 00:13:34.781 "thread": "nvmf_tgt_poll_group_000", 00:13:34.781 "listen_address": { 00:13:34.781 "trtype": "TCP", 00:13:34.781 "adrfam": "IPv4", 00:13:34.781 "traddr": "10.0.0.2", 00:13:34.781 "trsvcid": "4420" 00:13:34.781 }, 00:13:34.781 "peer_address": { 00:13:34.781 "trtype": "TCP", 00:13:34.781 "adrfam": "IPv4", 00:13:34.781 "traddr": "10.0.0.1", 00:13:34.781 "trsvcid": "44330" 00:13:34.782 }, 00:13:34.782 "auth": { 00:13:34.782 "state": "completed", 00:13:34.782 "digest": "sha256", 00:13:34.782 "dhgroup": "ffdhe2048" 00:13:34.782 } 00:13:34.782 } 00:13:34.782 ]' 00:13:34.782 18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:34.782 18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:34.782 18:56:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:34.782 18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:34.782 18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:34.782 18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.782 18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.782 18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:35.040 18:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDEyNDlkZGVlMjRmY2EwYWJlM2EwMWZiMzY1MmUxMjBjYjU2ZGVlMmY2N2ZjNjFlMjdhYzBlYzIzNmMyMDRhZCsG6tQ=: 00:13:35.971 18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:35.971 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:35.972 18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:35.972 18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.972 18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.972 18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.972 18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:35.972 18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:35.972 18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:35.972 18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:36.266 18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:13:36.266 18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:36.266 18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:36.266 18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:36.266 18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:36.266 18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:36.267 18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.267 18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.267 18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.267 18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.267 18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.267 18:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.833 00:13:36.833 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:36.833 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:36.833 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:36.833 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:36.833 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:36.833 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.833 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.833 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.833 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:36.833 { 00:13:36.833 "cntlid": 17, 00:13:36.833 "qid": 0, 00:13:36.833 "state": "enabled", 00:13:36.833 "thread": "nvmf_tgt_poll_group_000", 00:13:36.833 "listen_address": { 00:13:36.833 "trtype": "TCP", 00:13:36.833 "adrfam": "IPv4", 00:13:36.833 "traddr": "10.0.0.2", 00:13:36.833 "trsvcid": "4420" 00:13:36.833 }, 00:13:36.833 "peer_address": { 00:13:36.833 "trtype": "TCP", 00:13:36.833 "adrfam": "IPv4", 00:13:36.833 "traddr": "10.0.0.1", 00:13:36.833 "trsvcid": "44362" 00:13:36.833 }, 00:13:36.833 "auth": { 00:13:36.833 "state": "completed", 00:13:36.833 "digest": "sha256", 00:13:36.833 "dhgroup": "ffdhe3072" 00:13:36.833 } 00:13:36.833 } 00:13:36.833 ]' 00:13:36.833 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:37.090 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:37.090 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:37.090 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:37.090 18:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:37.090 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:37.090 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.091 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:37.348 18:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTFlMzJiNDE4YWE3OWQ4NzM4YzAyZDkyODAyYWE0NDJkOTU2NDM4NWEzMWU1MTRmJe5BTw==: --dhchap-ctrl-secret DHHC-1:03:NDllYTlkZmM3MmQ1ZjNhMjI2NGJhNWYyNjNhNTgxNDE3YzIyMmVhM2U5NzVmMDYxNzIwMTA1ZGQzN2M3MjE1MNOdqyA=: 00:13:38.281 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:38.281 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:38.281 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.281 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.281 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.281 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:38.281 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:38.281 18:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:38.539 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:13:38.539 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:38.539 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:38.539 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:38.539 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:38.539 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:38.539 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:38.539 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.539 18:56:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.539 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.539 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:38.539 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:38.796 00:13:38.796 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:38.796 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:38.796 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.053 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.053 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.053 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.053 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.053 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.054 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:39.054 { 00:13:39.054 "cntlid": 19, 00:13:39.054 "qid": 0, 00:13:39.054 "state": "enabled", 00:13:39.054 "thread": "nvmf_tgt_poll_group_000", 00:13:39.054 "listen_address": { 00:13:39.054 "trtype": "TCP", 00:13:39.054 "adrfam": "IPv4", 00:13:39.054 "traddr": "10.0.0.2", 00:13:39.054 "trsvcid": "4420" 00:13:39.054 }, 00:13:39.054 "peer_address": { 00:13:39.054 "trtype": "TCP", 00:13:39.054 "adrfam": "IPv4", 00:13:39.054 "traddr": "10.0.0.1", 00:13:39.054 "trsvcid": "34018" 00:13:39.054 }, 00:13:39.054 "auth": { 00:13:39.054 "state": "completed", 00:13:39.054 "digest": "sha256", 00:13:39.054 "dhgroup": "ffdhe3072" 00:13:39.054 } 00:13:39.054 } 00:13:39.054 ]' 00:13:39.311 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:39.311 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:39.311 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:39.311 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:39.311 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:39.311 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.311 18:56:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.311 18:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:39.569 18:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:YjA4YmI0YTBlNTIyNzUzYjFiNjZiMjYxYTg5MjFkYzhLPinR: --dhchap-ctrl-secret DHHC-1:02:NjJlOTYxMmYzODE0NzViYThjMWRjZjA5ZDIzYTEzMmJkNDRjMWYwNGQ0NjMzYjZiLvkyDg==: 00:13:40.502 18:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.502 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:40.502 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.502 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.502 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.502 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:40.502 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:40.502 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:40.759 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:13:40.759 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:40.759 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:40.759 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:40.759 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:40.759 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:40.759 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:40.759 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.759 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.759 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.759 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:40.759 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.017 00:13:41.274 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:41.274 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:41.274 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.530 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.530 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.530 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.530 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.530 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.530 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:41.530 { 00:13:41.530 "cntlid": 21, 00:13:41.530 "qid": 0, 00:13:41.530 "state": "enabled", 00:13:41.530 "thread": "nvmf_tgt_poll_group_000", 00:13:41.530 "listen_address": { 00:13:41.530 "trtype": "TCP", 00:13:41.530 "adrfam": "IPv4", 00:13:41.530 "traddr": "10.0.0.2", 00:13:41.530 "trsvcid": "4420" 00:13:41.530 }, 00:13:41.530 "peer_address": { 00:13:41.531 "trtype": "TCP", 00:13:41.531 "adrfam": "IPv4", 00:13:41.531 "traddr": "10.0.0.1", 00:13:41.531 "trsvcid": "34042" 00:13:41.531 }, 00:13:41.531 "auth": { 00:13:41.531 "state": "completed", 00:13:41.531 "digest": "sha256", 00:13:41.531 "dhgroup": "ffdhe3072" 00:13:41.531 } 00:13:41.531 } 00:13:41.531 ]' 00:13:41.531 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:41.531 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:41.531 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:41.531 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:41.531 18:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:41.531 18:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.531 18:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.531 18:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:41.788 
18:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjBiOTAyZWI3MDM5MjhjYzExNzNkY2MwMTQ0YmZjZTg2NTdlMzBhMzM0MDIzY2M0x+cEuw==: --dhchap-ctrl-secret DHHC-1:01:ZmNmOWU1ZjA5ZjUxNTYyNjFkMGY0MmM1NmVhOGFkODDesNza: 00:13:42.719 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.719 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:42.719 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.719 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.719 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.719 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:42.719 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:42.719 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:42.976 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:13:42.976 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:42.976 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:42.976 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:42.976 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:42.976 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:42.976 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:13:42.976 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.976 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.976 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.976 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:42.976 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:43.234 00:13:43.234 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:43.234 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:43.234 18:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.491 18:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.749 18:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:43.749 18:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.749 18:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.749 18:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.749 18:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:43.749 { 00:13:43.749 "cntlid": 23, 00:13:43.749 "qid": 0, 00:13:43.749 "state": "enabled", 00:13:43.749 "thread": "nvmf_tgt_poll_group_000", 00:13:43.749 "listen_address": { 00:13:43.749 "trtype": "TCP", 00:13:43.749 "adrfam": "IPv4", 00:13:43.749 "traddr": "10.0.0.2", 00:13:43.749 "trsvcid": "4420" 00:13:43.749 }, 00:13:43.749 "peer_address": { 00:13:43.749 "trtype": "TCP", 00:13:43.749 "adrfam": "IPv4", 00:13:43.749 "traddr": "10.0.0.1", 00:13:43.749 "trsvcid": "34064" 00:13:43.749 }, 00:13:43.749 "auth": { 00:13:43.749 "state": "completed", 00:13:43.749 "digest": "sha256", 00:13:43.749 "dhgroup": "ffdhe3072" 00:13:43.749 } 00:13:43.749 } 00:13:43.749 ]' 00:13:43.749 18:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:43.749 18:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:43.749 18:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:43.749 18:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:43.749 18:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:43.749 18:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.749 18:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.749 18:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.006 18:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDEyNDlkZGVlMjRmY2EwYWJlM2EwMWZiMzY1MmUxMjBjYjU2ZGVlMmY2N2ZjNjFlMjdhYzBlYzIzNmMyMDRhZCsG6tQ=: 00:13:44.939 18:56:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.939 18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:44.939 18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.939 18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.939 18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.939 18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:44.939 18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:44.939 18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:44.939 18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:45.196 18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:13:45.196 18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:45.196 18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:45.196 18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:45.196 18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:45.196 18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:45.197 18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:45.197 18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.197 18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.197 18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.197 18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:45.197 18:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:45.762 00:13:45.762 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:45.762 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:45.762 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:45.762 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.762 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:45.762 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.762 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.019 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.019 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:46.019 { 00:13:46.019 "cntlid": 25, 00:13:46.019 "qid": 0, 00:13:46.019 "state": "enabled", 00:13:46.019 "thread": "nvmf_tgt_poll_group_000", 00:13:46.019 "listen_address": { 00:13:46.019 "trtype": "TCP", 00:13:46.019 "adrfam": "IPv4", 00:13:46.019 "traddr": "10.0.0.2", 00:13:46.019 "trsvcid": "4420" 00:13:46.019 }, 00:13:46.019 "peer_address": { 00:13:46.019 "trtype": "TCP", 00:13:46.019 "adrfam": "IPv4", 00:13:46.019 "traddr": "10.0.0.1", 00:13:46.019 "trsvcid": "34090" 00:13:46.019 }, 00:13:46.019 "auth": { 00:13:46.019 "state": "completed", 00:13:46.019 "digest": "sha256", 00:13:46.019 "dhgroup": "ffdhe4096" 00:13:46.019 } 00:13:46.019 } 00:13:46.019 ]' 00:13:46.019 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:46.019 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:46.019 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:46.019 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:46.019 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:46.019 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:46.019 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:46.019 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:46.275 18:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTFlMzJiNDE4YWE3OWQ4NzM4YzAyZDkyODAyYWE0NDJkOTU2NDM4NWEzMWU1MTRmJe5BTw==: --dhchap-ctrl-secret DHHC-1:03:NDllYTlkZmM3MmQ1ZjNhMjI2NGJhNWYyNjNhNTgxNDE3YzIyMmVhM2U5NzVmMDYxNzIwMTA1ZGQzN2M3MjE1MNOdqyA=: 00:13:47.206 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:47.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
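The cycle that just finished above is the same connect_authenticate round that target/auth.sh repeats for every digest/dhgroup/key combination in this log. Below is a minimal sketch of one round, reconstructed only from the commands visible in the trace: the rpc.py path, host socket, subsystem NQN, host UUID, and flags are copied from the log; the DHHC-1 secret is elided; and the target-side rpc_cmd is assumed to use the default RPC socket, which the trace does not show.

# One connect_authenticate round as driven by target/auth.sh (sketch).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

# Host side (hostrpc in the trace): pin the initiator to one digest/dhgroup.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

# Target side (rpc_cmd in the trace): register the host with the key under test.
$rpc nvmf_subsystem_add_host $subnqn $hostnqn \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller, which forces the DH-HMAC-CHAP handshake.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q $hostnqn -n $subnqn \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'  # nvme0

# Target side: the qpair must report the negotiated digest/dhgroup and an
# auth state of "completed" -- this is what the [[ ... == ... ]] checks assert.
qpairs=$($rpc nvmf_subsystem_get_qpairs $subnqn)
jq -r '.[0].auth.digest'  <<< "$qpairs"   # sha256
jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # ffdhe4096
jq -r '.[0].auth.state'   <<< "$qpairs"   # completed

# Tear down, then repeat the handshake through the kernel initiator.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n $subnqn -i 1 -q $hostnqn \
    --hostid 29f67375-a902-e411-ace9-001e67bc3c9a \
    --dhchap-secret DHHC-1:...   # full secret elided; see the trace above
nvme disconnect -n $subnqn
$rpc nvmf_subsystem_remove_host $subnqn $hostnqn

Note that the trace validates each key twice per round: once through SPDK's own initiator (bdev_nvme_attach_controller) and once through the kernel initiator (nvme connect), so both authentication paths are exercised against the same target configuration.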
00:13:47.206 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:47.206 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.206 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.206 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.206 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:47.206 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:47.206 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:47.464 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:13:47.464 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:47.464 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:47.464 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:47.464 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:47.464 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:47.464 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.464 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.464 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.464 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.464 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.465 18:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.029 00:13:48.029 18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:48.029 18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:48.029 18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:48.320 18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.320 18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:48.320 18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.320 18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.320 18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.320 18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:48.320 { 00:13:48.320 "cntlid": 27, 00:13:48.320 "qid": 0, 00:13:48.320 "state": "enabled", 00:13:48.320 "thread": "nvmf_tgt_poll_group_000", 00:13:48.320 "listen_address": { 00:13:48.320 "trtype": "TCP", 00:13:48.320 "adrfam": "IPv4", 00:13:48.320 "traddr": "10.0.0.2", 00:13:48.320 "trsvcid": "4420" 00:13:48.320 }, 00:13:48.320 "peer_address": { 00:13:48.320 "trtype": "TCP", 00:13:48.320 "adrfam": "IPv4", 00:13:48.320 "traddr": "10.0.0.1", 00:13:48.320 "trsvcid": "34110" 00:13:48.320 }, 00:13:48.320 "auth": { 00:13:48.320 "state": "completed", 00:13:48.320 "digest": "sha256", 00:13:48.320 "dhgroup": "ffdhe4096" 00:13:48.320 } 00:13:48.320 } 00:13:48.320 ]' 00:13:48.320 18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:48.320 18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:48.321 18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:48.321 18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:48.321 18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:48.321 18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:48.321 18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:48.321 18:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:48.578 18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:YjA4YmI0YTBlNTIyNzUzYjFiNjZiMjYxYTg5MjFkYzhLPinR: --dhchap-ctrl-secret DHHC-1:02:NjJlOTYxMmYzODE0NzViYThjMWRjZjA5ZDIzYTEzMmJkNDRjMWYwNGQ0NjMzYjZiLvkyDg==: 00:13:49.510 18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:49.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:49.510 18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:49.510 18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.510 18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.510 18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.510 18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:49.510 18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:49.510 18:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:49.768 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:13:49.768 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:49.768 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:49.768 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:49.768 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:49.768 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:49.768 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:49.768 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.768 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.768 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.768 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:49.768 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.333 00:13:50.333 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:50.333 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:50.333 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:50.333 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:50.333 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:50.333 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.333 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.333 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.333 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:50.333 { 00:13:50.333 "cntlid": 29, 00:13:50.333 "qid": 0, 00:13:50.333 "state": "enabled", 00:13:50.333 "thread": "nvmf_tgt_poll_group_000", 00:13:50.333 "listen_address": { 00:13:50.333 "trtype": "TCP", 00:13:50.333 "adrfam": "IPv4", 00:13:50.333 "traddr": "10.0.0.2", 00:13:50.333 "trsvcid": "4420" 00:13:50.333 }, 00:13:50.333 "peer_address": { 00:13:50.333 "trtype": "TCP", 00:13:50.333 "adrfam": "IPv4", 00:13:50.333 "traddr": "10.0.0.1", 00:13:50.333 "trsvcid": "40854" 00:13:50.333 }, 00:13:50.333 "auth": { 00:13:50.333 "state": "completed", 00:13:50.333 "digest": "sha256", 00:13:50.333 "dhgroup": "ffdhe4096" 00:13:50.333 } 00:13:50.333 } 00:13:50.333 ]' 00:13:50.333 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:50.591 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:50.591 18:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:50.591 18:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:50.591 18:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:50.591 18:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:50.591 18:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:50.591 18:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:50.848 18:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjBiOTAyZWI3MDM5MjhjYzExNzNkY2MwMTQ0YmZjZTg2NTdlMzBhMzM0MDIzY2M0x+cEuw==: --dhchap-ctrl-secret DHHC-1:01:ZmNmOWU1ZjA5ZjUxNTYyNjFkMGY0MmM1NmVhOGFkODDesNza: 00:13:51.780 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:51.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:51.780 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:51.780 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.780 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.780 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.780 18:56:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:51.780 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:51.780 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:52.038 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:13:52.038 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:52.038 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:52.038 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:52.038 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:52.038 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:52.038 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:13:52.038 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.038 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.038 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.038 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:52.038 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:52.603 00:13:52.603 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:52.603 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:52.603 18:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:52.861 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:52.861 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:52.861 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.861 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.861 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:13:52.861 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:52.861 { 00:13:52.861 "cntlid": 31, 00:13:52.861 "qid": 0, 00:13:52.861 "state": "enabled", 00:13:52.861 "thread": "nvmf_tgt_poll_group_000", 00:13:52.861 "listen_address": { 00:13:52.861 "trtype": "TCP", 00:13:52.861 "adrfam": "IPv4", 00:13:52.861 "traddr": "10.0.0.2", 00:13:52.861 "trsvcid": "4420" 00:13:52.861 }, 00:13:52.861 "peer_address": { 00:13:52.861 "trtype": "TCP", 00:13:52.861 "adrfam": "IPv4", 00:13:52.861 "traddr": "10.0.0.1", 00:13:52.861 "trsvcid": "40882" 00:13:52.861 }, 00:13:52.861 "auth": { 00:13:52.861 "state": "completed", 00:13:52.861 "digest": "sha256", 00:13:52.861 "dhgroup": "ffdhe4096" 00:13:52.861 } 00:13:52.861 } 00:13:52.861 ]' 00:13:52.861 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:52.861 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:52.861 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:52.861 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:52.861 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:52.861 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:52.861 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:52.861 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.118 18:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDEyNDlkZGVlMjRmY2EwYWJlM2EwMWZiMzY1MmUxMjBjYjU2ZGVlMmY2N2ZjNjFlMjdhYzBlYzIzNmMyMDRhZCsG6tQ=: 00:13:54.051 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.051 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:54.051 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.051 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.051 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.051 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:54.051 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:54.051 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:54.051 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:54.308 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:13:54.308 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:54.308 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:54.308 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:54.308 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:54.308 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.308 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.308 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.308 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.308 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.308 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.308 18:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.893 00:13:54.893 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:54.893 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:54.893 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.167 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.167 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.167 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.167 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.167 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.167 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:55.167 { 00:13:55.167 "cntlid": 33, 00:13:55.167 "qid": 0, 00:13:55.167 "state": "enabled", 00:13:55.167 "thread": "nvmf_tgt_poll_group_000", 00:13:55.167 "listen_address": { 
00:13:55.167 "trtype": "TCP", 00:13:55.167 "adrfam": "IPv4", 00:13:55.167 "traddr": "10.0.0.2", 00:13:55.167 "trsvcid": "4420" 00:13:55.167 }, 00:13:55.167 "peer_address": { 00:13:55.167 "trtype": "TCP", 00:13:55.167 "adrfam": "IPv4", 00:13:55.167 "traddr": "10.0.0.1", 00:13:55.167 "trsvcid": "40914" 00:13:55.167 }, 00:13:55.167 "auth": { 00:13:55.167 "state": "completed", 00:13:55.167 "digest": "sha256", 00:13:55.167 "dhgroup": "ffdhe6144" 00:13:55.167 } 00:13:55.167 } 00:13:55.167 ]' 00:13:55.167 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:55.167 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:55.167 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:55.167 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:55.167 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:55.167 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.167 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.167 18:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.734 18:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTFlMzJiNDE4YWE3OWQ4NzM4YzAyZDkyODAyYWE0NDJkOTU2NDM4NWEzMWU1MTRmJe5BTw==: --dhchap-ctrl-secret DHHC-1:03:NDllYTlkZmM3MmQ1ZjNhMjI2NGJhNWYyNjNhNTgxNDE3YzIyMmVhM2U5NzVmMDYxNzIwMTA1ZGQzN2M3MjE1MNOdqyA=: 00:13:56.704 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.704 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:56.704 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.704 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.704 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.704 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:56.704 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:56.704 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:56.704 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:13:56.704 18:56:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:56.704 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:56.704 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:56.704 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:56.704 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.704 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.704 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.704 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.704 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.704 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.704 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.270 00:13:57.270 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:57.270 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:57.270 18:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.528 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.528 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.528 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.528 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.528 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.528 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:57.528 { 00:13:57.528 "cntlid": 35, 00:13:57.528 "qid": 0, 00:13:57.528 "state": "enabled", 00:13:57.528 "thread": "nvmf_tgt_poll_group_000", 00:13:57.528 "listen_address": { 00:13:57.528 "trtype": "TCP", 00:13:57.528 "adrfam": "IPv4", 00:13:57.528 "traddr": "10.0.0.2", 00:13:57.528 "trsvcid": "4420" 00:13:57.528 }, 00:13:57.528 "peer_address": { 00:13:57.528 "trtype": "TCP", 00:13:57.528 "adrfam": "IPv4", 00:13:57.528 "traddr": "10.0.0.1", 00:13:57.528 "trsvcid": "40936" 00:13:57.528 
}, 00:13:57.528 "auth": { 00:13:57.528 "state": "completed", 00:13:57.528 "digest": "sha256", 00:13:57.528 "dhgroup": "ffdhe6144" 00:13:57.528 } 00:13:57.528 } 00:13:57.528 ]' 00:13:57.528 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:57.785 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:57.785 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:57.785 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:57.785 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:57.785 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.785 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.785 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:58.043 18:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:YjA4YmI0YTBlNTIyNzUzYjFiNjZiMjYxYTg5MjFkYzhLPinR: --dhchap-ctrl-secret DHHC-1:02:NjJlOTYxMmYzODE0NzViYThjMWRjZjA5ZDIzYTEzMmJkNDRjMWYwNGQ0NjMzYjZiLvkyDg==: 00:13:58.974 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.974 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:58.974 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.974 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.974 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.974 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:58.974 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:58.974 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:59.232 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:13:59.232 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:59.232 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:59.232 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:59.232 18:56:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:59.232 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:59.232 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.232 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.232 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.232 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.232 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.232 18:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.797 00:13:59.797 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:59.797 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:59.797 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:00.054 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.054 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:00.054 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.054 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.054 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.054 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:00.054 { 00:14:00.054 "cntlid": 37, 00:14:00.054 "qid": 0, 00:14:00.054 "state": "enabled", 00:14:00.054 "thread": "nvmf_tgt_poll_group_000", 00:14:00.054 "listen_address": { 00:14:00.054 "trtype": "TCP", 00:14:00.054 "adrfam": "IPv4", 00:14:00.054 "traddr": "10.0.0.2", 00:14:00.054 "trsvcid": "4420" 00:14:00.054 }, 00:14:00.054 "peer_address": { 00:14:00.054 "trtype": "TCP", 00:14:00.054 "adrfam": "IPv4", 00:14:00.054 "traddr": "10.0.0.1", 00:14:00.054 "trsvcid": "59872" 00:14:00.054 }, 00:14:00.054 "auth": { 00:14:00.054 "state": "completed", 00:14:00.054 "digest": "sha256", 00:14:00.054 "dhgroup": "ffdhe6144" 00:14:00.054 } 00:14:00.054 } 00:14:00.054 ]' 00:14:00.054 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:00.054 18:56:37 
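
The digest/dhgroup/state assertions that follow are plain jq lookups against the nvmf_subsystem_get_qpairs JSON shown above; the field names are exactly as printed in this log. A minimal restatement, reusing $rpc from the previous sketch, with the values of this key2 pass:

    # Verify the negotiated auth parameters on the target side.
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha256"    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe6144" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]
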
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:00.054 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:00.054 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:00.054 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:00.311 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:00.311 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.311 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.569 18:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjBiOTAyZWI3MDM5MjhjYzExNzNkY2MwMTQ0YmZjZTg2NTdlMzBhMzM0MDIzY2M0x+cEuw==: --dhchap-ctrl-secret DHHC-1:01:ZmNmOWU1ZjA5ZjUxNTYyNjFkMGY0MmM1NmVhOGFkODDesNza: 00:14:01.500 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:01.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:01.500 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:01.500 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.500 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.500 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.500 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:01.500 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:01.501 18:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:01.758 18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:14:01.758 18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:01.758 18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:01.758 18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:01.758 18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:01.758 18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.758 18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:01.758 18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.758 18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.758 18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.758 18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:01.758 18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:02.322 00:14:02.322 18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:02.322 18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:02.322 18:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:02.580 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.580 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:02.580 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.580 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.580 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.580 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:02.580 { 00:14:02.580 "cntlid": 39, 00:14:02.580 "qid": 0, 00:14:02.580 "state": "enabled", 00:14:02.580 "thread": "nvmf_tgt_poll_group_000", 00:14:02.580 "listen_address": { 00:14:02.580 "trtype": "TCP", 00:14:02.580 "adrfam": "IPv4", 00:14:02.580 "traddr": "10.0.0.2", 00:14:02.580 "trsvcid": "4420" 00:14:02.580 }, 00:14:02.580 "peer_address": { 00:14:02.581 "trtype": "TCP", 00:14:02.581 "adrfam": "IPv4", 00:14:02.581 "traddr": "10.0.0.1", 00:14:02.581 "trsvcid": "59888" 00:14:02.581 }, 00:14:02.581 "auth": { 00:14:02.581 "state": "completed", 00:14:02.581 "digest": "sha256", 00:14:02.581 "dhgroup": "ffdhe6144" 00:14:02.581 } 00:14:02.581 } 00:14:02.581 ]' 00:14:02.581 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:02.581 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:02.581 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:02.581 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:02.581 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
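
Note that the key3 passes, like the one just shown, add the host with --dhchap-key only: the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion in the trace yields an empty array when no controller key exists for that slot, so key3 exercises one-way authentication (the host proves itself; the controller is not challenged back). The bash idiom, illustrated with made-up values:

    # ${arr[i]:+word} expands to "word" only when arr[i] is set and non-empty,
    # so an optional CLI flag pair can be spliced in as an array.
    ckeys=([0]="c0" [1]="c1" [2]="c2" [3]="")    # slot 3 intentionally empty
    for i in 0 3; do
        ckey=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
        echo "key$i -> ${ckey[@]:-<no controller key>}"
    done
    # key0 -> --dhchap-ctrlr-key ckey0
    # key3 -> <no controller key>
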
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:02.838 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:02.838 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.838 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:03.095 18:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDEyNDlkZGVlMjRmY2EwYWJlM2EwMWZiMzY1MmUxMjBjYjU2ZGVlMmY2N2ZjNjFlMjdhYzBlYzIzNmMyMDRhZCsG6tQ=: 00:14:04.027 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.027 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:04.027 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.027 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.027 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.027 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:04.027 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:04.027 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:04.027 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:04.285 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:14:04.285 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:04.285 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:04.285 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:04.285 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:04.285 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:04.285 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.285 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.285 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:04.285 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.285 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.285 18:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:05.217 00:14:05.217 18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:05.217 18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:05.218 18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.475 18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.475 18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:05.475 18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.475 18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.475 18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.475 18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:05.475 { 00:14:05.475 "cntlid": 41, 00:14:05.475 "qid": 0, 00:14:05.475 "state": "enabled", 00:14:05.475 "thread": "nvmf_tgt_poll_group_000", 00:14:05.475 "listen_address": { 00:14:05.475 "trtype": "TCP", 00:14:05.475 "adrfam": "IPv4", 00:14:05.475 "traddr": "10.0.0.2", 00:14:05.475 "trsvcid": "4420" 00:14:05.475 }, 00:14:05.475 "peer_address": { 00:14:05.475 "trtype": "TCP", 00:14:05.475 "adrfam": "IPv4", 00:14:05.475 "traddr": "10.0.0.1", 00:14:05.475 "trsvcid": "59906" 00:14:05.475 }, 00:14:05.475 "auth": { 00:14:05.475 "state": "completed", 00:14:05.475 "digest": "sha256", 00:14:05.475 "dhgroup": "ffdhe8192" 00:14:05.475 } 00:14:05.475 } 00:14:05.475 ]' 00:14:05.475 18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:05.475 18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:05.475 18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:05.475 18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:05.475 18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:05.475 18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.475 18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:05.475 18:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.732 18:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTFlMzJiNDE4YWE3OWQ4NzM4YzAyZDkyODAyYWE0NDJkOTU2NDM4NWEzMWU1MTRmJe5BTw==: --dhchap-ctrl-secret DHHC-1:03:NDllYTlkZmM3MmQ1ZjNhMjI2NGJhNWYyNjNhNTgxNDE3YzIyMmVhM2U5NzVmMDYxNzIwMTA1ZGQzN2M3MjE1MNOdqyA=: 00:14:06.662 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.662 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:06.662 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.662 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.662 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.662 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:06.662 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:06.662 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:06.919 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:14:06.919 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:06.919 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:06.919 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:06.919 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:06.919 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:06.919 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:06.920 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.920 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.920 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.920 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:06.920 18:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:07.851 00:14:07.851 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:07.851 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:07.851 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.109 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:08.109 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:08.109 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.109 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.109 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.109 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:08.109 { 00:14:08.109 "cntlid": 43, 00:14:08.109 "qid": 0, 00:14:08.109 "state": "enabled", 00:14:08.109 "thread": "nvmf_tgt_poll_group_000", 00:14:08.109 "listen_address": { 00:14:08.109 "trtype": "TCP", 00:14:08.109 "adrfam": "IPv4", 00:14:08.109 "traddr": "10.0.0.2", 00:14:08.109 "trsvcid": "4420" 00:14:08.109 }, 00:14:08.109 "peer_address": { 00:14:08.109 "trtype": "TCP", 00:14:08.109 "adrfam": "IPv4", 00:14:08.109 "traddr": "10.0.0.1", 00:14:08.109 "trsvcid": "59932" 00:14:08.109 }, 00:14:08.109 "auth": { 00:14:08.109 "state": "completed", 00:14:08.109 "digest": "sha256", 00:14:08.109 "dhgroup": "ffdhe8192" 00:14:08.109 } 00:14:08.109 } 00:14:08.109 ]' 00:14:08.109 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:08.109 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:08.109 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:08.109 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:08.109 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:08.367 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:08.367 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:08.367 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.624 18:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:YjA4YmI0YTBlNTIyNzUzYjFiNjZiMjYxYTg5MjFkYzhLPinR: --dhchap-ctrl-secret DHHC-1:02:NjJlOTYxMmYzODE0NzViYThjMWRjZjA5ZDIzYTEzMmJkNDRjMWYwNGQ0NjMzYjZiLvkyDg==: 00:14:09.557 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.557 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:09.557 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.557 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.557 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.557 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:09.557 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:09.557 18:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:09.815 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:14:09.815 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:09.815 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:09.815 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:09.815 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:09.815 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:09.815 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.815 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.815 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.815 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.815 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.815 18:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
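
The --dhchap-secret/--dhchap-ctrl-secret strings handed to nvme connect above use the DH-HMAC-CHAP secret representation DHHC-1:<t>:<base64>:. As a spec-level note rather than something this log itself proves: <t> indicates how the secret was transformed (00 = used as-is, 01/02/03 = hashed with SHA-256/384/512), and the base64 payload carries the key bytes followed by a CRC-32 of them. One way to take a secret apart:

    # Split a DHHC-1 secret into its fields (example taken from an
    # nvme connect line in this log).
    s='DHHC-1:01:YjA4YmI0YTBlNTIyNzUzYjFiNjZiMjYxYTg5MjFkYzhLPinR:'
    IFS=: read -r tag transform b64 _ <<< "$s"
    echo "tag=$tag transform=$transform"
    # 48 base64 chars decode to 36 bytes: a 32-byte secret plus 4-byte CRC-32.
    base64 -d <<< "$b64" | wc -c
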
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:10.748 00:14:10.748 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:10.748 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:10.748 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.748 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.748 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:10.748 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.748 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.748 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.748 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:10.748 { 00:14:10.748 "cntlid": 45, 00:14:10.748 "qid": 0, 00:14:10.748 "state": "enabled", 00:14:10.748 "thread": "nvmf_tgt_poll_group_000", 00:14:10.748 "listen_address": { 00:14:10.748 "trtype": "TCP", 00:14:10.748 "adrfam": "IPv4", 00:14:10.748 "traddr": "10.0.0.2", 00:14:10.748 "trsvcid": "4420" 00:14:10.748 }, 00:14:10.748 "peer_address": { 00:14:10.748 "trtype": "TCP", 00:14:10.748 "adrfam": "IPv4", 00:14:10.748 "traddr": "10.0.0.1", 00:14:10.748 "trsvcid": "58830" 00:14:10.748 }, 00:14:10.748 "auth": { 00:14:10.748 "state": "completed", 00:14:10.748 "digest": "sha256", 00:14:10.748 "dhgroup": "ffdhe8192" 00:14:10.748 } 00:14:10.748 } 00:14:10.748 ]' 00:14:10.748 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:11.005 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:11.005 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:11.005 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:11.005 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:11.005 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.005 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.005 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.262 18:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjBiOTAyZWI3MDM5MjhjYzExNzNkY2MwMTQ0YmZjZTg2NTdlMzBhMzM0MDIzY2M0x+cEuw==: --dhchap-ctrl-secret 
DHHC-1:01:ZmNmOWU1ZjA5ZjUxNTYyNjFkMGY0MmM1NmVhOGFkODDesNza: 00:14:12.195 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.195 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:12.195 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.195 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.195 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.195 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:12.195 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:12.195 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:12.452 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:14:12.453 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:12.453 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:12.453 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:12.453 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:12.453 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.453 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:12.453 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.453 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.453 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.453 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:12.453 18:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:13.385 00:14:13.385 18:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:13.385 18:56:50 
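
The recurring xtrace_disable / set +x / [[ 0 == 0 ]] triplets around every rpc_cmd are the harness muting command tracing while the RPC runs and then asserting that it exited 0. A simplified approximation of that wrapper; the real helper in common/autotest_common.sh also restores the prior xtrace state and reuses a long-lived rpc.py process ($rpc as in the earlier sketch):

    rpc_cmd() {
        set +x                   # xtrace_disable: stop echoing commands
        local status=0
        "$rpc" "$@" || status=$?
        set -x                   # tracing back on
        [[ $status == 0 ]]       # shows up as "[[ 0 == 0 ]]" in this log
    }
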
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:13.385 18:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.643 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.643 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.643 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.643 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.643 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.643 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:13.643 { 00:14:13.643 "cntlid": 47, 00:14:13.643 "qid": 0, 00:14:13.643 "state": "enabled", 00:14:13.643 "thread": "nvmf_tgt_poll_group_000", 00:14:13.643 "listen_address": { 00:14:13.643 "trtype": "TCP", 00:14:13.643 "adrfam": "IPv4", 00:14:13.643 "traddr": "10.0.0.2", 00:14:13.643 "trsvcid": "4420" 00:14:13.643 }, 00:14:13.643 "peer_address": { 00:14:13.643 "trtype": "TCP", 00:14:13.643 "adrfam": "IPv4", 00:14:13.643 "traddr": "10.0.0.1", 00:14:13.643 "trsvcid": "58858" 00:14:13.643 }, 00:14:13.643 "auth": { 00:14:13.643 "state": "completed", 00:14:13.643 "digest": "sha256", 00:14:13.643 "dhgroup": "ffdhe8192" 00:14:13.643 } 00:14:13.643 } 00:14:13.643 ]' 00:14:13.643 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:13.643 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:13.643 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:13.643 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:13.643 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:13.643 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.643 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.643 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.900 18:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDEyNDlkZGVlMjRmY2EwYWJlM2EwMWZiMzY1MmUxMjBjYjU2ZGVlMmY2N2ZjNjFlMjdhYzBlYzIzNmMyMDRhZCsG6tQ=: 00:14:14.872 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.872 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:14.872 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.872 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.872 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.872 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:14.872 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:14.872 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:14.872 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:14.872 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:15.130 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:14:15.130 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:15.130 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:15.130 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:15.130 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:15.130 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.130 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:15.130 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.130 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.130 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.130 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:15.130 18:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:15.694 00:14:15.694 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:15.694 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
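
From here the trace switches to sha384 with the "null" DH group, i.e. DH-HMAC-CHAP without the ephemeral Diffie-Hellman step. The for-loop frames visible in the trace make the enclosing structure a full cross-product; the array contents are not printed in this excerpt, so the values below are inferred from the combinations that do appear (sha256 ends at ffdhe8192, sha384 starts at null) and should be treated as assumptions:

    keys=(key0 key1 key2 key3)                   # names per the --dhchap-key args
    digests=(sha256 sha384 sha512)               # assumed full list
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)  # assumed
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                test_one_combo "$digest" "$dhgroup" "$keyid"   # first sketch
            done
        done
    done
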
bdev_nvme_get_controllers 00:14:15.694 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:15.694 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.694 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.694 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.694 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.694 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.694 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:15.694 { 00:14:15.694 "cntlid": 49, 00:14:15.694 "qid": 0, 00:14:15.694 "state": "enabled", 00:14:15.694 "thread": "nvmf_tgt_poll_group_000", 00:14:15.694 "listen_address": { 00:14:15.694 "trtype": "TCP", 00:14:15.694 "adrfam": "IPv4", 00:14:15.694 "traddr": "10.0.0.2", 00:14:15.694 "trsvcid": "4420" 00:14:15.694 }, 00:14:15.694 "peer_address": { 00:14:15.694 "trtype": "TCP", 00:14:15.694 "adrfam": "IPv4", 00:14:15.694 "traddr": "10.0.0.1", 00:14:15.694 "trsvcid": "58894" 00:14:15.694 }, 00:14:15.694 "auth": { 00:14:15.694 "state": "completed", 00:14:15.694 "digest": "sha384", 00:14:15.694 "dhgroup": "null" 00:14:15.694 } 00:14:15.694 } 00:14:15.694 ]' 00:14:15.694 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:15.951 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:15.951 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:15.951 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:15.951 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:15.951 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.951 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.951 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:16.208 18:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTFlMzJiNDE4YWE3OWQ4NzM4YzAyZDkyODAyYWE0NDJkOTU2NDM4NWEzMWU1MTRmJe5BTw==: --dhchap-ctrl-secret DHHC-1:03:NDllYTlkZmM3MmQ1ZjNhMjI2NGJhNWYyNjNhNTgxNDE3YzIyMmVhM2U5NzVmMDYxNzIwMTA1ZGQzN2M3MjE1MNOdqyA=: 00:14:17.139 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.140 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:17.140 18:56:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.140 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.140 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.140 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:17.140 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:17.140 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:17.397 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:14:17.397 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:17.397 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:17.397 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:17.397 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:17.397 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.397 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:17.397 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.397 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.397 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.397 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:17.397 18:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:17.654 00:14:17.654 18:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:17.654 18:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:17.654 18:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.912 18:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.912 18:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
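
The escaped comparisons such as [[ nvme0 == \n\v\m\e\0 ]] above are not corruption: inside [[ ]], the right-hand side of == is a glob pattern, so the script backslash-escapes every character to force a literal match, and xtrace prints those escapes back out. Two equivalent spellings, with hostrpc as defined in the first sketch:

    name=$(hostrpc bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == \n\v\m\e\0 ]]    # escaped pattern, as the log shows it
    [[ $name == "nvme0" ]]       # quoted RHS disables globbing; same literal match
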
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.912 18:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.912 18:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.912 18:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.912 18:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:17.912 { 00:14:17.912 "cntlid": 51, 00:14:17.912 "qid": 0, 00:14:17.912 "state": "enabled", 00:14:17.912 "thread": "nvmf_tgt_poll_group_000", 00:14:17.912 "listen_address": { 00:14:17.912 "trtype": "TCP", 00:14:17.912 "adrfam": "IPv4", 00:14:17.912 "traddr": "10.0.0.2", 00:14:17.912 "trsvcid": "4420" 00:14:17.912 }, 00:14:17.912 "peer_address": { 00:14:17.912 "trtype": "TCP", 00:14:17.912 "adrfam": "IPv4", 00:14:17.912 "traddr": "10.0.0.1", 00:14:17.912 "trsvcid": "58922" 00:14:17.912 }, 00:14:17.912 "auth": { 00:14:17.912 "state": "completed", 00:14:17.912 "digest": "sha384", 00:14:17.912 "dhgroup": "null" 00:14:17.912 } 00:14:17.912 } 00:14:17.912 ]' 00:14:17.912 18:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:17.912 18:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:17.912 18:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:17.912 18:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:17.912 18:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:18.170 18:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.170 18:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.170 18:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.427 18:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:YjA4YmI0YTBlNTIyNzUzYjFiNjZiMjYxYTg5MjFkYzhLPinR: --dhchap-ctrl-secret DHHC-1:02:NjJlOTYxMmYzODE0NzViYThjMWRjZjA5ZDIzYTEzMmJkNDRjMWYwNGQ0NjMzYjZiLvkyDg==: 00:14:19.358 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.358 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:19.358 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.358 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.358 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.358 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:19.358 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:19.358 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:19.615 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:14:19.615 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:19.615 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:19.615 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:19.615 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:19.615 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.615 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.615 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.615 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.615 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.615 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.615 18:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.873 00:14:19.873 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:19.873 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.873 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:20.131 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:20.131 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:20.131 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.131 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.131 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:14:20.131 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:20.131 { 00:14:20.131 "cntlid": 53, 00:14:20.131 "qid": 0, 00:14:20.132 "state": "enabled", 00:14:20.132 "thread": "nvmf_tgt_poll_group_000", 00:14:20.132 "listen_address": { 00:14:20.132 "trtype": "TCP", 00:14:20.132 "adrfam": "IPv4", 00:14:20.132 "traddr": "10.0.0.2", 00:14:20.132 "trsvcid": "4420" 00:14:20.132 }, 00:14:20.132 "peer_address": { 00:14:20.132 "trtype": "TCP", 00:14:20.132 "adrfam": "IPv4", 00:14:20.132 "traddr": "10.0.0.1", 00:14:20.132 "trsvcid": "40688" 00:14:20.132 }, 00:14:20.132 "auth": { 00:14:20.132 "state": "completed", 00:14:20.132 "digest": "sha384", 00:14:20.132 "dhgroup": "null" 00:14:20.132 } 00:14:20.132 } 00:14:20.132 ]' 00:14:20.132 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:20.132 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:20.132 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:20.132 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:20.132 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:20.132 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.132 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.132 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.389 18:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjBiOTAyZWI3MDM5MjhjYzExNzNkY2MwMTQ0YmZjZTg2NTdlMzBhMzM0MDIzY2M0x+cEuw==: --dhchap-ctrl-secret DHHC-1:01:ZmNmOWU1ZjA5ZjUxNTYyNjFkMGY0MmM1NmVhOGFkODDesNza: 00:14:21.321 18:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.321 18:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:21.321 18:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.321 18:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.321 18:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.321 18:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:21.321 18:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:21.321 18:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:21.885 18:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:14:21.885 18:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:21.885 18:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:21.885 18:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:21.885 18:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:21.885 18:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.885 18:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:21.885 18:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.885 18:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.885 18:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.885 18:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:21.885 18:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:22.142 00:14:22.142 18:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:22.142 18:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:22.142 18:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.400 18:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.400 18:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.400 18:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.400 18:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.400 18:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.400 18:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:22.400 { 00:14:22.400 "cntlid": 55, 00:14:22.400 "qid": 0, 00:14:22.400 "state": "enabled", 00:14:22.400 "thread": "nvmf_tgt_poll_group_000", 00:14:22.400 "listen_address": { 00:14:22.400 "trtype": "TCP", 00:14:22.400 "adrfam": "IPv4", 00:14:22.400 "traddr": "10.0.0.2", 00:14:22.400 "trsvcid": "4420" 00:14:22.400 }, 00:14:22.400 "peer_address": { 
00:14:22.400 "trtype": "TCP", 00:14:22.400 "adrfam": "IPv4", 00:14:22.400 "traddr": "10.0.0.1", 00:14:22.400 "trsvcid": "40704" 00:14:22.400 }, 00:14:22.400 "auth": { 00:14:22.400 "state": "completed", 00:14:22.400 "digest": "sha384", 00:14:22.400 "dhgroup": "null" 00:14:22.400 } 00:14:22.400 } 00:14:22.400 ]' 00:14:22.400 18:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:22.400 18:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:22.400 18:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:22.400 18:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:22.400 18:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:22.400 18:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.400 18:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.400 18:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.657 18:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDEyNDlkZGVlMjRmY2EwYWJlM2EwMWZiMzY1MmUxMjBjYjU2ZGVlMmY2N2ZjNjFlMjdhYzBlYzIzNmMyMDRhZCsG6tQ=: 00:14:24.030 18:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.030 18:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:24.030 18:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.030 18:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.030 18:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.030 18:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:24.030 18:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:24.030 18:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:24.030 18:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:24.030 18:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:14:24.030 18:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:24.030 18:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:14:24.030 18:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:24.030 18:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:24.030 18:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:24.030 18:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:24.030 18:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.030 18:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.030 18:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.030 18:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:24.030 18:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:24.288 00:14:24.288 18:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:24.288 18:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:24.288 18:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.546 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.546 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.546 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.546 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.546 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.546 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:24.546 { 00:14:24.546 "cntlid": 57, 00:14:24.546 "qid": 0, 00:14:24.546 "state": "enabled", 00:14:24.546 "thread": "nvmf_tgt_poll_group_000", 00:14:24.546 "listen_address": { 00:14:24.546 "trtype": "TCP", 00:14:24.546 "adrfam": "IPv4", 00:14:24.546 "traddr": "10.0.0.2", 00:14:24.546 "trsvcid": "4420" 00:14:24.546 }, 00:14:24.546 "peer_address": { 00:14:24.546 "trtype": "TCP", 00:14:24.546 "adrfam": "IPv4", 00:14:24.546 "traddr": "10.0.0.1", 00:14:24.546 "trsvcid": "40730" 00:14:24.546 }, 00:14:24.546 "auth": { 00:14:24.546 "state": "completed", 00:14:24.546 "digest": "sha384", 00:14:24.546 "dhgroup": "ffdhe2048" 00:14:24.546 } 00:14:24.546 } 00:14:24.546 ]' 
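For orientation: the trace above and below repeats one fixed cycle per (digest, dhgroup, key) combination — here sha384 paired with dhgroups null, ffdhe2048, ffdhe3072, ffdhe4096 across key0..key3, each verified against subsystem nqn.2024-03.io.spdk:cnode0. A minimal sketch of one iteration follows; the socket path, addresses, ports, and subsystem NQN are the ones used in this run, while $HOST_NQN and $KEY are illustrative placeholders for the uuid-based host NQN and the key index seen in the log:

  # host-side bdev_nvme options: restrict DH-HMAC-CHAP digests/dhgroups for this pass
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
  # target side: allow the host on the subsystem with the key pair under test
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOST_NQN" \
      --dhchap-key "key$KEY" --dhchap-ctrlr-key "ckey$KEY"
  # host side: attach a controller, which forces authentication to run
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOST_NQN" \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key "key$KEY" --dhchap-ctrlr-key "ckey$KEY"
  # target side: assert the qpair completed authentication with the expected parameters
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -e '.[0].auth.state == "completed" and .[0].auth.digest == "sha384"'
  # teardown: detach, re-connect once via nvme-cli with the DHHC-1 secrets, disconnect, remove host
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The script itself checks digest, dhgroup, and state with separate jq -r extractions and [[ ... ]] comparisons, as visible in the trace; the single jq -e predicate above is a condensed equivalent, not the verbatim test.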
00:14:24.546 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:24.546 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:24.546 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:24.546 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:24.546 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:24.804 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.804 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.804 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.061 18:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTFlMzJiNDE4YWE3OWQ4NzM4YzAyZDkyODAyYWE0NDJkOTU2NDM4NWEzMWU1MTRmJe5BTw==: --dhchap-ctrl-secret DHHC-1:03:NDllYTlkZmM3MmQ1ZjNhMjI2NGJhNWYyNjNhNTgxNDE3YzIyMmVhM2U5NzVmMDYxNzIwMTA1ZGQzN2M3MjE1MNOdqyA=: 00:14:25.992 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.992 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:25.992 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.992 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.992 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.992 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:25.992 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:25.992 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:26.249 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:14:26.249 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:26.249 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:26.249 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:26.249 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:26.249 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.249 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:26.249 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.249 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.249 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.249 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:26.249 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:26.507 00:14:26.507 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:26.507 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.507 18:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:26.764 18:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.764 18:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.764 18:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.764 18:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.764 18:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.764 18:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:26.764 { 00:14:26.764 "cntlid": 59, 00:14:26.764 "qid": 0, 00:14:26.764 "state": "enabled", 00:14:26.764 "thread": "nvmf_tgt_poll_group_000", 00:14:26.764 "listen_address": { 00:14:26.764 "trtype": "TCP", 00:14:26.764 "adrfam": "IPv4", 00:14:26.764 "traddr": "10.0.0.2", 00:14:26.764 "trsvcid": "4420" 00:14:26.764 }, 00:14:26.764 "peer_address": { 00:14:26.764 "trtype": "TCP", 00:14:26.764 "adrfam": "IPv4", 00:14:26.764 "traddr": "10.0.0.1", 00:14:26.764 "trsvcid": "40764" 00:14:26.764 }, 00:14:26.764 "auth": { 00:14:26.764 "state": "completed", 00:14:26.764 "digest": "sha384", 00:14:26.764 "dhgroup": "ffdhe2048" 00:14:26.764 } 00:14:26.764 } 00:14:26.764 ]' 00:14:26.764 18:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:26.764 18:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:26.764 18:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:26.764 18:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:26.764 18:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:26.764 18:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.764 18:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.764 18:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.329 18:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:YjA4YmI0YTBlNTIyNzUzYjFiNjZiMjYxYTg5MjFkYzhLPinR: --dhchap-ctrl-secret DHHC-1:02:NjJlOTYxMmYzODE0NzViYThjMWRjZjA5ZDIzYTEzMmJkNDRjMWYwNGQ0NjMzYjZiLvkyDg==: 00:14:28.262 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.262 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:28.262 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.262 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.262 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.262 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:28.262 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:28.262 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:28.518 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:14:28.518 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:28.518 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:28.518 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:28.518 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:28.518 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.518 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:28.518 
18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.518 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.518 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.518 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:28.518 18:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:28.774 00:14:28.774 18:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:28.774 18:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:28.774 18:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.031 18:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.031 18:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.031 18:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.031 18:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.031 18:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.031 18:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:29.031 { 00:14:29.031 "cntlid": 61, 00:14:29.031 "qid": 0, 00:14:29.031 "state": "enabled", 00:14:29.031 "thread": "nvmf_tgt_poll_group_000", 00:14:29.031 "listen_address": { 00:14:29.031 "trtype": "TCP", 00:14:29.031 "adrfam": "IPv4", 00:14:29.031 "traddr": "10.0.0.2", 00:14:29.031 "trsvcid": "4420" 00:14:29.031 }, 00:14:29.031 "peer_address": { 00:14:29.031 "trtype": "TCP", 00:14:29.031 "adrfam": "IPv4", 00:14:29.031 "traddr": "10.0.0.1", 00:14:29.031 "trsvcid": "36432" 00:14:29.031 }, 00:14:29.031 "auth": { 00:14:29.031 "state": "completed", 00:14:29.031 "digest": "sha384", 00:14:29.031 "dhgroup": "ffdhe2048" 00:14:29.031 } 00:14:29.031 } 00:14:29.031 ]' 00:14:29.031 18:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:29.031 18:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:29.031 18:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:29.031 18:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:29.031 18:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:29.031 18:57:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.031 18:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.031 18:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.288 18:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjBiOTAyZWI3MDM5MjhjYzExNzNkY2MwMTQ0YmZjZTg2NTdlMzBhMzM0MDIzY2M0x+cEuw==: --dhchap-ctrl-secret DHHC-1:01:ZmNmOWU1ZjA5ZjUxNTYyNjFkMGY0MmM1NmVhOGFkODDesNza: 00:14:30.659 18:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.659 18:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:30.659 18:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.659 18:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.659 18:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.659 18:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:30.659 18:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:30.659 18:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:30.659 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:14:30.659 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:30.659 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:30.659 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:30.659 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:30.659 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.659 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:30.659 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.659 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.659 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.659 
18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:30.659 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:30.917 00:14:30.917 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:30.917 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.917 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:31.174 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.174 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.174 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.174 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.174 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.174 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:31.174 { 00:14:31.174 "cntlid": 63, 00:14:31.174 "qid": 0, 00:14:31.174 "state": "enabled", 00:14:31.174 "thread": "nvmf_tgt_poll_group_000", 00:14:31.174 "listen_address": { 00:14:31.174 "trtype": "TCP", 00:14:31.174 "adrfam": "IPv4", 00:14:31.174 "traddr": "10.0.0.2", 00:14:31.174 "trsvcid": "4420" 00:14:31.174 }, 00:14:31.174 "peer_address": { 00:14:31.174 "trtype": "TCP", 00:14:31.174 "adrfam": "IPv4", 00:14:31.174 "traddr": "10.0.0.1", 00:14:31.174 "trsvcid": "36448" 00:14:31.174 }, 00:14:31.174 "auth": { 00:14:31.174 "state": "completed", 00:14:31.174 "digest": "sha384", 00:14:31.174 "dhgroup": "ffdhe2048" 00:14:31.174 } 00:14:31.174 } 00:14:31.174 ]' 00:14:31.174 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:31.174 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:31.174 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:31.431 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:31.431 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:31.431 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.431 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.431 18:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:14:31.689 18:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDEyNDlkZGVlMjRmY2EwYWJlM2EwMWZiMzY1MmUxMjBjYjU2ZGVlMmY2N2ZjNjFlMjdhYzBlYzIzNmMyMDRhZCsG6tQ=: 00:14:32.620 18:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.620 18:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:32.620 18:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.620 18:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.620 18:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.620 18:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:32.620 18:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:32.621 18:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:32.621 18:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:32.877 18:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:14:32.877 18:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:32.877 18:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:32.877 18:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:32.877 18:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:32.877 18:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.877 18:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.877 18:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.877 18:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.877 18:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.877 18:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.877 18:57:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:33.136 00:14:33.136 18:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:33.136 18:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.136 18:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:33.418 18:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.418 18:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.418 18:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.418 18:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.418 18:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.418 18:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:33.418 { 00:14:33.418 "cntlid": 65, 00:14:33.418 "qid": 0, 00:14:33.418 "state": "enabled", 00:14:33.418 "thread": "nvmf_tgt_poll_group_000", 00:14:33.418 "listen_address": { 00:14:33.418 "trtype": "TCP", 00:14:33.418 "adrfam": "IPv4", 00:14:33.418 "traddr": "10.0.0.2", 00:14:33.418 "trsvcid": "4420" 00:14:33.418 }, 00:14:33.418 "peer_address": { 00:14:33.418 "trtype": "TCP", 00:14:33.418 "adrfam": "IPv4", 00:14:33.418 "traddr": "10.0.0.1", 00:14:33.418 "trsvcid": "36482" 00:14:33.418 }, 00:14:33.418 "auth": { 00:14:33.418 "state": "completed", 00:14:33.418 "digest": "sha384", 00:14:33.418 "dhgroup": "ffdhe3072" 00:14:33.418 } 00:14:33.418 } 00:14:33.418 ]' 00:14:33.418 18:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:33.418 18:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:33.418 18:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:33.418 18:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:33.418 18:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:33.418 18:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.418 18:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.418 18:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.678 18:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTFlMzJiNDE4YWE3OWQ4NzM4YzAyZDkyODAyYWE0NDJkOTU2NDM4NWEzMWU1MTRmJe5BTw==: --dhchap-ctrl-secret DHHC-1:03:NDllYTlkZmM3MmQ1ZjNhMjI2NGJhNWYyNjNhNTgxNDE3YzIyMmVhM2U5NzVmMDYxNzIwMTA1ZGQzN2M3MjE1MNOdqyA=: 00:14:34.608 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.608 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:34.608 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.608 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.608 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.608 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:34.608 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:34.608 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:34.864 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:14:34.864 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:34.864 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:34.864 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:34.864 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:34.864 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.864 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.864 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.864 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.864 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.864 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.864 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:35.428 00:14:35.428 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:35.428 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.428 18:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:35.686 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.686 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.686 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.686 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.686 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.686 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:35.686 { 00:14:35.686 "cntlid": 67, 00:14:35.686 "qid": 0, 00:14:35.686 "state": "enabled", 00:14:35.686 "thread": "nvmf_tgt_poll_group_000", 00:14:35.686 "listen_address": { 00:14:35.686 "trtype": "TCP", 00:14:35.686 "adrfam": "IPv4", 00:14:35.686 "traddr": "10.0.0.2", 00:14:35.686 "trsvcid": "4420" 00:14:35.686 }, 00:14:35.686 "peer_address": { 00:14:35.686 "trtype": "TCP", 00:14:35.686 "adrfam": "IPv4", 00:14:35.686 "traddr": "10.0.0.1", 00:14:35.686 "trsvcid": "36512" 00:14:35.686 }, 00:14:35.686 "auth": { 00:14:35.686 "state": "completed", 00:14:35.686 "digest": "sha384", 00:14:35.686 "dhgroup": "ffdhe3072" 00:14:35.686 } 00:14:35.686 } 00:14:35.686 ]' 00:14:35.686 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:35.686 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:35.686 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:35.686 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:35.686 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:35.686 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.686 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.686 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.942 18:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:YjA4YmI0YTBlNTIyNzUzYjFiNjZiMjYxYTg5MjFkYzhLPinR: --dhchap-ctrl-secret DHHC-1:02:NjJlOTYxMmYzODE0NzViYThjMWRjZjA5ZDIzYTEzMmJkNDRjMWYwNGQ0NjMzYjZiLvkyDg==: 00:14:36.890 18:57:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.891 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:36.891 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.891 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.891 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.891 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:36.891 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:36.891 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:37.147 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:14:37.147 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:37.147 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:37.147 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:37.147 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:37.147 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.147 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:37.147 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.147 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.147 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.147 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:37.148 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:37.404 00:14:37.404 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:37.404 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:14:37.405 18:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.662 18:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.662 18:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.662 18:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.662 18:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.662 18:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.662 18:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:37.662 { 00:14:37.662 "cntlid": 69, 00:14:37.662 "qid": 0, 00:14:37.662 "state": "enabled", 00:14:37.662 "thread": "nvmf_tgt_poll_group_000", 00:14:37.662 "listen_address": { 00:14:37.662 "trtype": "TCP", 00:14:37.662 "adrfam": "IPv4", 00:14:37.662 "traddr": "10.0.0.2", 00:14:37.662 "trsvcid": "4420" 00:14:37.662 }, 00:14:37.662 "peer_address": { 00:14:37.662 "trtype": "TCP", 00:14:37.662 "adrfam": "IPv4", 00:14:37.662 "traddr": "10.0.0.1", 00:14:37.662 "trsvcid": "36548" 00:14:37.662 }, 00:14:37.662 "auth": { 00:14:37.662 "state": "completed", 00:14:37.662 "digest": "sha384", 00:14:37.662 "dhgroup": "ffdhe3072" 00:14:37.662 } 00:14:37.662 } 00:14:37.662 ]' 00:14:37.662 18:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:37.662 18:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:37.662 18:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:37.662 18:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:37.662 18:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:37.919 18:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.919 18:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.919 18:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.176 18:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjBiOTAyZWI3MDM5MjhjYzExNzNkY2MwMTQ0YmZjZTg2NTdlMzBhMzM0MDIzY2M0x+cEuw==: --dhchap-ctrl-secret DHHC-1:01:ZmNmOWU1ZjA5ZjUxNTYyNjFkMGY0MmM1NmVhOGFkODDesNza: 00:14:39.105 18:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.105 18:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:39.105 18:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.105 18:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.105 18:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.105 18:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:39.105 18:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:39.105 18:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:39.363 18:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:14:39.363 18:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:39.363 18:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:39.363 18:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:39.363 18:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:39.363 18:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.363 18:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:39.363 18:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.363 18:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.363 18:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.363 18:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:39.363 18:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:39.622 00:14:39.622 18:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:39.622 18:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.622 18:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:39.880 18:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.880 18:57:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.880 18:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.880 18:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.880 18:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.880 18:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:39.880 { 00:14:39.880 "cntlid": 71, 00:14:39.880 "qid": 0, 00:14:39.880 "state": "enabled", 00:14:39.880 "thread": "nvmf_tgt_poll_group_000", 00:14:39.880 "listen_address": { 00:14:39.880 "trtype": "TCP", 00:14:39.880 "adrfam": "IPv4", 00:14:39.880 "traddr": "10.0.0.2", 00:14:39.880 "trsvcid": "4420" 00:14:39.880 }, 00:14:39.880 "peer_address": { 00:14:39.880 "trtype": "TCP", 00:14:39.880 "adrfam": "IPv4", 00:14:39.880 "traddr": "10.0.0.1", 00:14:39.880 "trsvcid": "43330" 00:14:39.880 }, 00:14:39.880 "auth": { 00:14:39.880 "state": "completed", 00:14:39.880 "digest": "sha384", 00:14:39.880 "dhgroup": "ffdhe3072" 00:14:39.880 } 00:14:39.880 } 00:14:39.880 ]' 00:14:39.880 18:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:39.880 18:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:39.880 18:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:39.880 18:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:39.880 18:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:39.880 18:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.880 18:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.880 18:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.137 18:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDEyNDlkZGVlMjRmY2EwYWJlM2EwMWZiMzY1MmUxMjBjYjU2ZGVlMmY2N2ZjNjFlMjdhYzBlYzIzNmMyMDRhZCsG6tQ=: 00:14:41.069 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.069 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:41.069 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.069 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.069 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.069 18:57:18 
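
The [[ ... ]] assertions above verify the negotiated parameters the target reports per queue pair. A sketch of that check, reusing $rpc and $subnqn from the previous sketch and the exact jq filters from the trace:

    # Confirm the controller exists and the qpair authenticated as configured.
    name=$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]

    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
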
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:41.069 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:41.069 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:41.069 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:41.327 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:14:41.327 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:41.327 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:41.327 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:41.327 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:41.327 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.327 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.327 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.327 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.327 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.327 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.327 18:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.899 00:14:41.899 18:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:41.899 18:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:41.899 18:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.158 18:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.158 18:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.158 18:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.158 18:57:19 
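
The for dhgroup / for keyid frames (auth.sh@92/@93) mark the test advancing from ffdhe3072 to ffdhe4096 and restarting at key0. The driving loop is, in outline (a sketch; the keys/ckeys arrays and the connect_authenticate and hostrpc functions are defined in target/auth.sh, not here):

    # sha384 pass: every DH group against every key index.
    digest=sha384
    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    for dhgroup in "${dhgroups[@]}"; do                                    # auth.sh@92
        for keyid in "${!keys[@]}"; do                                     # auth.sh@93
            hostrpc bdev_nvme_set_options \
                --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"    # auth.sh@94
            connect_authenticate "$digest" "$dhgroup" "$keyid"             # auth.sh@96
        done
    done
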
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.158 18:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.158 18:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:42.158 { 00:14:42.158 "cntlid": 73, 00:14:42.158 "qid": 0, 00:14:42.158 "state": "enabled", 00:14:42.158 "thread": "nvmf_tgt_poll_group_000", 00:14:42.158 "listen_address": { 00:14:42.158 "trtype": "TCP", 00:14:42.158 "adrfam": "IPv4", 00:14:42.158 "traddr": "10.0.0.2", 00:14:42.158 "trsvcid": "4420" 00:14:42.158 }, 00:14:42.158 "peer_address": { 00:14:42.158 "trtype": "TCP", 00:14:42.158 "adrfam": "IPv4", 00:14:42.158 "traddr": "10.0.0.1", 00:14:42.158 "trsvcid": "43358" 00:14:42.158 }, 00:14:42.158 "auth": { 00:14:42.158 "state": "completed", 00:14:42.158 "digest": "sha384", 00:14:42.158 "dhgroup": "ffdhe4096" 00:14:42.158 } 00:14:42.158 } 00:14:42.158 ]' 00:14:42.158 18:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:42.158 18:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:42.158 18:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:42.158 18:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:42.158 18:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:42.158 18:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.158 18:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.158 18:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.417 18:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTFlMzJiNDE4YWE3OWQ4NzM4YzAyZDkyODAyYWE0NDJkOTU2NDM4NWEzMWU1MTRmJe5BTw==: --dhchap-ctrl-secret DHHC-1:03:NDllYTlkZmM3MmQ1ZjNhMjI2NGJhNWYyNjNhNTgxNDE3YzIyMmVhM2U5NzVmMDYxNzIwMTA1ZGQzN2M3MjE1MNOdqyA=: 00:14:43.349 18:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.349 18:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:43.349 18:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.349 18:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.349 18:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.349 18:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:43.349 18:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:43.349 18:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:43.607 18:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:14:43.607 18:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:43.607 18:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:43.607 18:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:43.607 18:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:43.607 18:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.607 18:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.607 18:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.607 18:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.607 18:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.607 18:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.607 18:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:44.173 00:14:44.173 18:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:44.173 18:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.173 18:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:44.431 18:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.431 18:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.431 18:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.431 18:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.431 18:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.431 18:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:14:44.431 { 00:14:44.431 "cntlid": 75, 00:14:44.431 "qid": 0, 00:14:44.431 "state": "enabled", 00:14:44.431 "thread": "nvmf_tgt_poll_group_000", 00:14:44.431 "listen_address": { 00:14:44.431 "trtype": "TCP", 00:14:44.431 "adrfam": "IPv4", 00:14:44.431 "traddr": "10.0.0.2", 00:14:44.431 "trsvcid": "4420" 00:14:44.431 }, 00:14:44.431 "peer_address": { 00:14:44.431 "trtype": "TCP", 00:14:44.431 "adrfam": "IPv4", 00:14:44.431 "traddr": "10.0.0.1", 00:14:44.431 "trsvcid": "43394" 00:14:44.431 }, 00:14:44.431 "auth": { 00:14:44.431 "state": "completed", 00:14:44.431 "digest": "sha384", 00:14:44.431 "dhgroup": "ffdhe4096" 00:14:44.431 } 00:14:44.431 } 00:14:44.431 ]' 00:14:44.431 18:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:44.431 18:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:44.431 18:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:44.431 18:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:44.431 18:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:44.431 18:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.431 18:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.431 18:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.688 18:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:YjA4YmI0YTBlNTIyNzUzYjFiNjZiMjYxYTg5MjFkYzhLPinR: --dhchap-ctrl-secret DHHC-1:02:NjJlOTYxMmYzODE0NzViYThjMWRjZjA5ZDIzYTEzMmJkNDRjMWYwNGQ0NjMzYjZiLvkyDg==: 00:14:45.621 18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.621 18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:45.621 18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.621 18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.621 18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.621 18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:45.621 18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:45.621 18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:45.878 
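
Alongside the SPDK host application, each round also re-authenticates through the kernel initiator with nvme-cli, passing the secrets inline. The DHHC-1:NN:<base64>: strings are the standard NVMe-oF DH-CHAP key encoding; reading the two-digit field as the HMAC used to transform the secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) comes from the NVMe specification, not from this log. A sketch with placeholder key material:

    # Kernel-initiator round trip (sketch); the secret values are placeholders.
    hostid=29f67375-a902-e411-ace9-001e67bc3c9a
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
        -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" --hostid "$hostid" \
        --dhchap-secret 'DHHC-1:01:<host-key-base64>:' \
        --dhchap-ctrl-secret 'DHHC-1:02:<controller-key-base64>:'
    nvme disconnect -n "$subnqn"
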
18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:14:45.878 18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:45.878 18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:45.878 18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:45.878 18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:45.878 18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.878 18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.878 18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.878 18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.878 18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.878 18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.878 18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:46.443 00:14:46.443 18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:46.443 18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:46.443 18:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.701 18:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.701 18:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.701 18:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.701 18:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.701 18:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.701 18:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:46.701 { 00:14:46.701 "cntlid": 77, 00:14:46.701 "qid": 0, 00:14:46.701 "state": "enabled", 00:14:46.701 "thread": "nvmf_tgt_poll_group_000", 00:14:46.701 "listen_address": { 00:14:46.701 "trtype": "TCP", 00:14:46.701 "adrfam": "IPv4", 00:14:46.701 "traddr": "10.0.0.2", 00:14:46.701 "trsvcid": "4420" 00:14:46.701 }, 00:14:46.701 "peer_address": { 
00:14:46.701 "trtype": "TCP", 00:14:46.701 "adrfam": "IPv4", 00:14:46.701 "traddr": "10.0.0.1", 00:14:46.701 "trsvcid": "43416" 00:14:46.701 }, 00:14:46.701 "auth": { 00:14:46.701 "state": "completed", 00:14:46.701 "digest": "sha384", 00:14:46.701 "dhgroup": "ffdhe4096" 00:14:46.701 } 00:14:46.701 } 00:14:46.701 ]' 00:14:46.701 18:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:46.701 18:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:46.701 18:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:46.701 18:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:46.701 18:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:46.701 18:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.701 18:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.701 18:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.958 18:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjBiOTAyZWI3MDM5MjhjYzExNzNkY2MwMTQ0YmZjZTg2NTdlMzBhMzM0MDIzY2M0x+cEuw==: --dhchap-ctrl-secret DHHC-1:01:ZmNmOWU1ZjA5ZjUxNTYyNjFkMGY0MmM1NmVhOGFkODDesNza: 00:14:47.890 18:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.890 18:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:47.890 18:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.890 18:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.147 18:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.147 18:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:48.147 18:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:48.147 18:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:48.405 18:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:14:48.405 18:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:48.405 18:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:14:48.405 18:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:48.405 18:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:48.405 18:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.405 18:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:48.405 18:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.405 18:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.405 18:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.405 18:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:48.405 18:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:48.662 00:14:48.662 18:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:48.662 18:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.662 18:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:48.920 18:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.920 18:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.920 18:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.920 18:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.920 18:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.920 18:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:48.920 { 00:14:48.920 "cntlid": 79, 00:14:48.920 "qid": 0, 00:14:48.920 "state": "enabled", 00:14:48.920 "thread": "nvmf_tgt_poll_group_000", 00:14:48.920 "listen_address": { 00:14:48.920 "trtype": "TCP", 00:14:48.920 "adrfam": "IPv4", 00:14:48.920 "traddr": "10.0.0.2", 00:14:48.920 "trsvcid": "4420" 00:14:48.920 }, 00:14:48.920 "peer_address": { 00:14:48.920 "trtype": "TCP", 00:14:48.920 "adrfam": "IPv4", 00:14:48.920 "traddr": "10.0.0.1", 00:14:48.920 "trsvcid": "45844" 00:14:48.920 }, 00:14:48.920 "auth": { 00:14:48.920 "state": "completed", 00:14:48.920 "digest": "sha384", 00:14:48.920 "dhgroup": "ffdhe4096" 00:14:48.920 } 00:14:48.920 } 00:14:48.920 ]' 00:14:48.920 18:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:14:48.920 18:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:48.920 18:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:49.178 18:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:49.178 18:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:49.178 18:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.178 18:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.178 18:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.435 18:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDEyNDlkZGVlMjRmY2EwYWJlM2EwMWZiMzY1MmUxMjBjYjU2ZGVlMmY2N2ZjNjFlMjdhYzBlYzIzNmMyMDRhZCsG6tQ=: 00:14:50.378 18:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.378 18:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:50.378 18:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.378 18:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.378 18:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.378 18:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:50.378 18:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:50.378 18:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:50.378 18:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:50.638 18:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:14:50.638 18:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:50.638 18:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:50.638 18:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:50.638 18:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:50.638 18:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:14:50.638 18:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:50.638 18:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.638 18:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.638 18:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.638 18:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:50.638 18:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.223 00:14:51.223 18:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:51.223 18:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:51.223 18:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.497 18:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.497 18:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.497 18:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.497 18:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.497 18:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.498 18:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:51.498 { 00:14:51.498 "cntlid": 81, 00:14:51.498 "qid": 0, 00:14:51.498 "state": "enabled", 00:14:51.498 "thread": "nvmf_tgt_poll_group_000", 00:14:51.498 "listen_address": { 00:14:51.498 "trtype": "TCP", 00:14:51.498 "adrfam": "IPv4", 00:14:51.498 "traddr": "10.0.0.2", 00:14:51.498 "trsvcid": "4420" 00:14:51.498 }, 00:14:51.498 "peer_address": { 00:14:51.498 "trtype": "TCP", 00:14:51.498 "adrfam": "IPv4", 00:14:51.498 "traddr": "10.0.0.1", 00:14:51.498 "trsvcid": "45890" 00:14:51.498 }, 00:14:51.498 "auth": { 00:14:51.498 "state": "completed", 00:14:51.498 "digest": "sha384", 00:14:51.498 "dhgroup": "ffdhe6144" 00:14:51.498 } 00:14:51.498 } 00:14:51.498 ]' 00:14:51.498 18:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:51.498 18:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:51.498 18:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:51.755 18:57:29 
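
Each round ends symmetrically so the next digest/dhgroup/key combination starts from a clean state; the frame numbers below correspond to the auth.sh markers in the trace. In outline (a sketch; $hostkey and $ctrlkey stand in for the round's secrets):

    hostrpc bdev_nvme_detach_controller nvme0                             # auth.sh@49
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid "$hostid" --dhchap-secret "$hostkey" \
        ${ctrlkey:+--dhchap-ctrl-secret "$ctrlkey"}                       # auth.sh@52
    nvme disconnect -n "$subnqn"                                          # auth.sh@55
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"               # auth.sh@56
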
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:51.755 18:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:51.755 18:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.756 18:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.756 18:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.013 18:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTFlMzJiNDE4YWE3OWQ4NzM4YzAyZDkyODAyYWE0NDJkOTU2NDM4NWEzMWU1MTRmJe5BTw==: --dhchap-ctrl-secret DHHC-1:03:NDllYTlkZmM3MmQ1ZjNhMjI2NGJhNWYyNjNhNTgxNDE3YzIyMmVhM2U5NzVmMDYxNzIwMTA1ZGQzN2M3MjE1MNOdqyA=: 00:14:52.947 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.947 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:52.947 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.947 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.947 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.947 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:52.947 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:52.947 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:53.204 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:14:53.204 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:53.204 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:53.204 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:53.204 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:53.204 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.204 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.204 18:57:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.205 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.205 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.205 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.205 18:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.769 00:14:53.769 18:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:53.769 18:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:53.769 18:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.027 18:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.027 18:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.027 18:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.027 18:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.027 18:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.027 18:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:54.027 { 00:14:54.027 "cntlid": 83, 00:14:54.027 "qid": 0, 00:14:54.027 "state": "enabled", 00:14:54.027 "thread": "nvmf_tgt_poll_group_000", 00:14:54.027 "listen_address": { 00:14:54.027 "trtype": "TCP", 00:14:54.027 "adrfam": "IPv4", 00:14:54.027 "traddr": "10.0.0.2", 00:14:54.027 "trsvcid": "4420" 00:14:54.027 }, 00:14:54.027 "peer_address": { 00:14:54.027 "trtype": "TCP", 00:14:54.027 "adrfam": "IPv4", 00:14:54.027 "traddr": "10.0.0.1", 00:14:54.027 "trsvcid": "45912" 00:14:54.027 }, 00:14:54.027 "auth": { 00:14:54.027 "state": "completed", 00:14:54.027 "digest": "sha384", 00:14:54.027 "dhgroup": "ffdhe6144" 00:14:54.027 } 00:14:54.027 } 00:14:54.027 ]' 00:14:54.027 18:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:54.027 18:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:54.027 18:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:54.027 18:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:54.027 18:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:54.027 18:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.027 18:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.027 18:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.285 18:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:YjA4YmI0YTBlNTIyNzUzYjFiNjZiMjYxYTg5MjFkYzhLPinR: --dhchap-ctrl-secret DHHC-1:02:NjJlOTYxMmYzODE0NzViYThjMWRjZjA5ZDIzYTEzMmJkNDRjMWYwNGQ0NjMzYjZiLvkyDg==: 00:14:55.219 18:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.476 18:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:55.476 18:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.476 18:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.476 18:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.476 18:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:55.476 18:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:55.476 18:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:55.734 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:14:55.734 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:55.734 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:55.734 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:55.734 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:55.734 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.734 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.734 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.734 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.734 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.734 18:57:33 
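
Two SPDK applications run throughout: the nvmf target, answering rpc_cmd on its default socket, and a separate host application driven through /var/tmp/host.sock, which is why the auth.sh@31 frames always pass -s /var/tmp/host.sock. The hostrpc helper those frames expand from is essentially a one-line wrapper; a sketch under that assumption:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Route a command to the host application's RPC socket.
    hostrpc() {
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }

    hostrpc bdev_nvme_get_controllers                                 # host app
    "$rootdir/scripts/rpc.py" nvmf_subsystem_get_qpairs "$subnqn"     # target
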
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.734 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.298 00:14:56.298 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:56.298 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:56.298 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.556 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.556 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.556 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.556 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.556 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.556 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:56.556 { 00:14:56.556 "cntlid": 85, 00:14:56.556 "qid": 0, 00:14:56.556 "state": "enabled", 00:14:56.556 "thread": "nvmf_tgt_poll_group_000", 00:14:56.556 "listen_address": { 00:14:56.556 "trtype": "TCP", 00:14:56.556 "adrfam": "IPv4", 00:14:56.556 "traddr": "10.0.0.2", 00:14:56.556 "trsvcid": "4420" 00:14:56.556 }, 00:14:56.556 "peer_address": { 00:14:56.556 "trtype": "TCP", 00:14:56.556 "adrfam": "IPv4", 00:14:56.556 "traddr": "10.0.0.1", 00:14:56.556 "trsvcid": "45944" 00:14:56.556 }, 00:14:56.556 "auth": { 00:14:56.556 "state": "completed", 00:14:56.556 "digest": "sha384", 00:14:56.556 "dhgroup": "ffdhe6144" 00:14:56.556 } 00:14:56.556 } 00:14:56.556 ]' 00:14:56.556 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:56.556 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:56.556 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:56.556 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:56.556 18:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:56.556 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.556 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.556 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.814 18:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjBiOTAyZWI3MDM5MjhjYzExNzNkY2MwMTQ0YmZjZTg2NTdlMzBhMzM0MDIzY2M0x+cEuw==: --dhchap-ctrl-secret DHHC-1:01:ZmNmOWU1ZjA5ZjUxNTYyNjFkMGY0MmM1NmVhOGFkODDesNza: 00:14:57.746 18:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.746 18:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:57.746 18:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.746 18:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.746 18:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.746 18:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:57.746 18:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:57.746 18:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:58.003 18:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:14:58.003 18:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:58.003 18:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:58.003 18:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:58.003 18:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:58.003 18:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.003 18:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:58.003 18:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.003 18:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.004 18:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.004 18:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:58.004 18:57:35 
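
Note the asymmetry visible in this round: keys 0 through 2 are installed together with a controller key (--dhchap-ctrlr-key ckeyN, bidirectional authentication), while key3 carries none, so that round authenticates the host only. The ckey=(${ckeys[$3]:+...}) expansion in the trace implements exactly that: the array stays empty when no controller key is defined for the index. The pattern, as a sketch:

    # Optional controller key via bash alternate-value expansion.
    # Expands to two extra arguments when ckeys[keyid] is set, to nothing
    # otherwise; an empty array quoted as [@] contributes zero words.
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"
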
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:58.569 00:14:58.569 18:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:58.569 18:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:58.569 18:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.827 18:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.827 18:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.827 18:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.827 18:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.827 18:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.827 18:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:58.827 { 00:14:58.827 "cntlid": 87, 00:14:58.827 "qid": 0, 00:14:58.827 "state": "enabled", 00:14:58.827 "thread": "nvmf_tgt_poll_group_000", 00:14:58.827 "listen_address": { 00:14:58.827 "trtype": "TCP", 00:14:58.827 "adrfam": "IPv4", 00:14:58.827 "traddr": "10.0.0.2", 00:14:58.827 "trsvcid": "4420" 00:14:58.827 }, 00:14:58.827 "peer_address": { 00:14:58.827 "trtype": "TCP", 00:14:58.827 "adrfam": "IPv4", 00:14:58.827 "traddr": "10.0.0.1", 00:14:58.827 "trsvcid": "36010" 00:14:58.827 }, 00:14:58.827 "auth": { 00:14:58.827 "state": "completed", 00:14:58.827 "digest": "sha384", 00:14:58.827 "dhgroup": "ffdhe6144" 00:14:58.827 } 00:14:58.827 } 00:14:58.827 ]' 00:14:58.827 18:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:58.827 18:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:58.827 18:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:59.085 18:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:59.085 18:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:59.085 18:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.085 18:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.085 18:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.342 18:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a 
--dhchap-secret DHHC-1:03:ZDEyNDlkZGVlMjRmY2EwYWJlM2EwMWZiMzY1MmUxMjBjYjU2ZGVlMmY2N2ZjNjFlMjdhYzBlYzIzNmMyMDRhZCsG6tQ=: 00:15:00.275 18:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.275 18:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:00.275 18:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.275 18:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.275 18:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.275 18:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:00.275 18:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:00.275 18:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:00.275 18:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:00.532 18:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:15:00.533 18:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:00.533 18:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:00.533 18:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:00.533 18:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:00.533 18:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.533 18:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.533 18:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.533 18:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.533 18:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.533 18:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.533 18:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.466 00:15:01.466 18:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:01.466 18:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:01.466 18:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.723 18:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.723 18:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.723 18:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.723 18:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.723 18:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.723 18:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:01.723 { 00:15:01.723 "cntlid": 89, 00:15:01.723 "qid": 0, 00:15:01.723 "state": "enabled", 00:15:01.723 "thread": "nvmf_tgt_poll_group_000", 00:15:01.723 "listen_address": { 00:15:01.723 "trtype": "TCP", 00:15:01.723 "adrfam": "IPv4", 00:15:01.723 "traddr": "10.0.0.2", 00:15:01.723 "trsvcid": "4420" 00:15:01.723 }, 00:15:01.723 "peer_address": { 00:15:01.723 "trtype": "TCP", 00:15:01.723 "adrfam": "IPv4", 00:15:01.723 "traddr": "10.0.0.1", 00:15:01.723 "trsvcid": "36038" 00:15:01.723 }, 00:15:01.724 "auth": { 00:15:01.724 "state": "completed", 00:15:01.724 "digest": "sha384", 00:15:01.724 "dhgroup": "ffdhe8192" 00:15:01.724 } 00:15:01.724 } 00:15:01.724 ]' 00:15:01.724 18:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:01.724 18:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:01.724 18:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:01.724 18:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:01.724 18:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:01.724 18:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.724 18:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.724 18:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.981 18:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTFlMzJiNDE4YWE3OWQ4NzM4YzAyZDkyODAyYWE0NDJkOTU2NDM4NWEzMWU1MTRmJe5BTw==: --dhchap-ctrl-secret DHHC-1:03:NDllYTlkZmM3MmQ1ZjNhMjI2NGJhNWYyNjNhNTgxNDE3YzIyMmVhM2U5NzVmMDYxNzIwMTA1ZGQzN2M3MjE1MNOdqyA=: 00:15:03.353 18:57:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.353 18:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:03.353 18:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.353 18:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.353 18:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.353 18:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:03.353 18:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:03.353 18:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:03.353 18:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:15:03.353 18:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:03.353 18:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:03.353 18:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:03.353 18:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:03.353 18:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.353 18:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.353 18:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.353 18:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.353 18:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.353 18:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.353 18:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.286 00:15:04.286 18:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:04.286 18:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:15:04.286 18:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.544 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.544 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.544 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.544 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.544 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.544 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:04.544 { 00:15:04.544 "cntlid": 91, 00:15:04.544 "qid": 0, 00:15:04.544 "state": "enabled", 00:15:04.545 "thread": "nvmf_tgt_poll_group_000", 00:15:04.545 "listen_address": { 00:15:04.545 "trtype": "TCP", 00:15:04.545 "adrfam": "IPv4", 00:15:04.545 "traddr": "10.0.0.2", 00:15:04.545 "trsvcid": "4420" 00:15:04.545 }, 00:15:04.545 "peer_address": { 00:15:04.545 "trtype": "TCP", 00:15:04.545 "adrfam": "IPv4", 00:15:04.545 "traddr": "10.0.0.1", 00:15:04.545 "trsvcid": "36066" 00:15:04.545 }, 00:15:04.545 "auth": { 00:15:04.545 "state": "completed", 00:15:04.545 "digest": "sha384", 00:15:04.545 "dhgroup": "ffdhe8192" 00:15:04.545 } 00:15:04.545 } 00:15:04.545 ]' 00:15:04.545 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:04.545 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:04.545 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:04.545 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:04.545 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:04.802 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.802 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.802 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.060 18:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:YjA4YmI0YTBlNTIyNzUzYjFiNjZiMjYxYTg5MjFkYzhLPinR: --dhchap-ctrl-secret DHHC-1:02:NjJlOTYxMmYzODE0NzViYThjMWRjZjA5ZDIzYTEzMmJkNDRjMWYwNGQ0NjMzYjZiLvkyDg==: 00:15:05.990 18:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.990 18:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:05.990 18:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.990 18:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.990 18:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.990 18:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:05.990 18:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:05.991 18:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:06.247 18:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:15:06.248 18:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:06.248 18:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:06.248 18:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:06.248 18:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:06.248 18:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.248 18:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.248 18:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.248 18:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.248 18:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.248 18:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.248 18:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.180 00:15:07.180 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:07.180 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:07.180 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.437 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:15:07.437 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.437 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.437 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.437 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.437 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:07.437 { 00:15:07.437 "cntlid": 93, 00:15:07.437 "qid": 0, 00:15:07.437 "state": "enabled", 00:15:07.437 "thread": "nvmf_tgt_poll_group_000", 00:15:07.437 "listen_address": { 00:15:07.437 "trtype": "TCP", 00:15:07.437 "adrfam": "IPv4", 00:15:07.437 "traddr": "10.0.0.2", 00:15:07.437 "trsvcid": "4420" 00:15:07.437 }, 00:15:07.437 "peer_address": { 00:15:07.437 "trtype": "TCP", 00:15:07.437 "adrfam": "IPv4", 00:15:07.437 "traddr": "10.0.0.1", 00:15:07.437 "trsvcid": "36078" 00:15:07.437 }, 00:15:07.437 "auth": { 00:15:07.437 "state": "completed", 00:15:07.437 "digest": "sha384", 00:15:07.437 "dhgroup": "ffdhe8192" 00:15:07.437 } 00:15:07.437 } 00:15:07.437 ]' 00:15:07.437 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:07.437 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:07.437 18:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:07.437 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:07.437 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:07.437 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.437 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.437 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.694 18:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjBiOTAyZWI3MDM5MjhjYzExNzNkY2MwMTQ0YmZjZTg2NTdlMzBhMzM0MDIzY2M0x+cEuw==: --dhchap-ctrl-secret DHHC-1:01:ZmNmOWU1ZjA5ZjUxNTYyNjFkMGY0MmM1NmVhOGFkODDesNza: 00:15:09.064 18:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.064 18:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:09.064 18:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.064 18:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.064 18:57:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.064 18:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:09.064 18:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:09.064 18:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:09.064 18:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:15:09.064 18:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:09.064 18:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:09.064 18:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:09.064 18:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:09.064 18:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.064 18:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:09.064 18:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.064 18:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.064 18:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.064 18:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:09.064 18:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:09.994 00:15:09.994 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:09.994 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:09.994 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.252 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.252 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.252 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.252 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:15:10.252 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.252 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:10.252 { 00:15:10.252 "cntlid": 95, 00:15:10.252 "qid": 0, 00:15:10.252 "state": "enabled", 00:15:10.252 "thread": "nvmf_tgt_poll_group_000", 00:15:10.252 "listen_address": { 00:15:10.252 "trtype": "TCP", 00:15:10.252 "adrfam": "IPv4", 00:15:10.252 "traddr": "10.0.0.2", 00:15:10.252 "trsvcid": "4420" 00:15:10.252 }, 00:15:10.252 "peer_address": { 00:15:10.252 "trtype": "TCP", 00:15:10.252 "adrfam": "IPv4", 00:15:10.252 "traddr": "10.0.0.1", 00:15:10.252 "trsvcid": "57128" 00:15:10.252 }, 00:15:10.252 "auth": { 00:15:10.252 "state": "completed", 00:15:10.252 "digest": "sha384", 00:15:10.252 "dhgroup": "ffdhe8192" 00:15:10.252 } 00:15:10.252 } 00:15:10.252 ]' 00:15:10.252 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:10.252 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:10.252 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:10.509 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:10.509 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:10.509 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.509 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.509 18:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.784 18:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDEyNDlkZGVlMjRmY2EwYWJlM2EwMWZiMzY1MmUxMjBjYjU2ZGVlMmY2N2ZjNjFlMjdhYzBlYzIzNmMyMDRhZCsG6tQ=: 00:15:11.737 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.737 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:11.737 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.737 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.737 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.737 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:11.737 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:11.737 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:11.737 18:57:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:11.737 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:11.995 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:15:11.995 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:11.995 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:11.995 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:11.995 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:11.995 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.995 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.995 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.995 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.995 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.995 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.995 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.252 00:15:12.252 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:12.252 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.252 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:12.510 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.510 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.510 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.510 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.510 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.510 18:57:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:12.510 { 00:15:12.510 "cntlid": 97, 00:15:12.510 "qid": 0, 00:15:12.510 "state": "enabled", 00:15:12.510 "thread": "nvmf_tgt_poll_group_000", 00:15:12.510 "listen_address": { 00:15:12.510 "trtype": "TCP", 00:15:12.510 "adrfam": "IPv4", 00:15:12.510 "traddr": "10.0.0.2", 00:15:12.510 "trsvcid": "4420" 00:15:12.510 }, 00:15:12.510 "peer_address": { 00:15:12.510 "trtype": "TCP", 00:15:12.510 "adrfam": "IPv4", 00:15:12.510 "traddr": "10.0.0.1", 00:15:12.510 "trsvcid": "57152" 00:15:12.510 }, 00:15:12.510 "auth": { 00:15:12.510 "state": "completed", 00:15:12.510 "digest": "sha512", 00:15:12.510 "dhgroup": "null" 00:15:12.510 } 00:15:12.510 } 00:15:12.510 ]' 00:15:12.510 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:12.510 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:12.510 18:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:12.510 18:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:12.510 18:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:12.510 18:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.510 18:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.510 18:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.768 18:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTFlMzJiNDE4YWE3OWQ4NzM4YzAyZDkyODAyYWE0NDJkOTU2NDM4NWEzMWU1MTRmJe5BTw==: --dhchap-ctrl-secret DHHC-1:03:NDllYTlkZmM3MmQ1ZjNhMjI2NGJhNWYyNjNhNTgxNDE3YzIyMmVhM2U5NzVmMDYxNzIwMTA1ZGQzN2M3MjE1MNOdqyA=: 00:15:13.700 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.700 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:13.700 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.700 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.700 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.700 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:13.700 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:13.700 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:13.958 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:15:13.958 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:13.958 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:13.958 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:13.958 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:13.958 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.958 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.958 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.958 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.958 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.958 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.958 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:14.522 00:15:14.522 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:14.522 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:14.522 18:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.522 18:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.522 18:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.522 18:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.522 18:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.522 18:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.522 18:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:14.522 { 00:15:14.522 "cntlid": 99, 00:15:14.522 "qid": 0, 00:15:14.522 "state": "enabled", 00:15:14.522 "thread": "nvmf_tgt_poll_group_000", 00:15:14.522 "listen_address": { 00:15:14.522 "trtype": "TCP", 00:15:14.522 "adrfam": "IPv4", 00:15:14.522 
"traddr": "10.0.0.2", 00:15:14.522 "trsvcid": "4420" 00:15:14.522 }, 00:15:14.522 "peer_address": { 00:15:14.522 "trtype": "TCP", 00:15:14.522 "adrfam": "IPv4", 00:15:14.522 "traddr": "10.0.0.1", 00:15:14.522 "trsvcid": "57180" 00:15:14.522 }, 00:15:14.522 "auth": { 00:15:14.522 "state": "completed", 00:15:14.522 "digest": "sha512", 00:15:14.522 "dhgroup": "null" 00:15:14.522 } 00:15:14.522 } 00:15:14.522 ]' 00:15:14.522 18:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:14.779 18:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:14.779 18:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:14.779 18:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:14.779 18:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:14.779 18:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.779 18:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.779 18:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.036 18:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:YjA4YmI0YTBlNTIyNzUzYjFiNjZiMjYxYTg5MjFkYzhLPinR: --dhchap-ctrl-secret DHHC-1:02:NjJlOTYxMmYzODE0NzViYThjMWRjZjA5ZDIzYTEzMmJkNDRjMWYwNGQ0NjMzYjZiLvkyDg==: 00:15:15.972 18:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.972 18:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:15.972 18:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.972 18:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.972 18:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.972 18:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:15.972 18:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:15.972 18:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:16.228 18:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:15:16.228 18:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:16.228 18:57:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:16.228 18:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:16.228 18:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:16.228 18:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.228 18:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.228 18:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.228 18:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.228 18:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.228 18:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.228 18:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.485 00:15:16.485 18:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:16.485 18:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.485 18:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:16.743 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.743 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.743 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.743 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.743 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.743 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:16.743 { 00:15:16.743 "cntlid": 101, 00:15:16.743 "qid": 0, 00:15:16.743 "state": "enabled", 00:15:16.743 "thread": "nvmf_tgt_poll_group_000", 00:15:16.743 "listen_address": { 00:15:16.743 "trtype": "TCP", 00:15:16.743 "adrfam": "IPv4", 00:15:16.743 "traddr": "10.0.0.2", 00:15:16.743 "trsvcid": "4420" 00:15:16.743 }, 00:15:16.743 "peer_address": { 00:15:16.743 "trtype": "TCP", 00:15:16.743 "adrfam": "IPv4", 00:15:16.743 "traddr": "10.0.0.1", 00:15:16.743 "trsvcid": "57188" 00:15:16.743 }, 00:15:16.743 "auth": { 00:15:16.743 "state": "completed", 00:15:16.743 "digest": "sha512", 00:15:16.743 "dhgroup": "null" 
00:15:16.743 } 00:15:16.743 } 00:15:16.743 ]' 00:15:16.743 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:16.743 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:16.743 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:16.743 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:16.743 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:16.743 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.743 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.743 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.000 18:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjBiOTAyZWI3MDM5MjhjYzExNzNkY2MwMTQ0YmZjZTg2NTdlMzBhMzM0MDIzY2M0x+cEuw==: --dhchap-ctrl-secret DHHC-1:01:ZmNmOWU1ZjA5ZjUxNTYyNjFkMGY0MmM1NmVhOGFkODDesNza: 00:15:17.930 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.930 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:17.930 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.930 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.930 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.930 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:17.930 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:17.931 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:18.189 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:15:18.189 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:18.189 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:18.189 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:18.189 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:18.189 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.189 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:18.189 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.189 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.189 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.189 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:18.189 18:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:18.754 00:15:18.754 18:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:18.754 18:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:18.754 18:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.012 18:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.012 18:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.012 18:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.012 18:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.012 18:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.012 18:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:19.012 { 00:15:19.012 "cntlid": 103, 00:15:19.012 "qid": 0, 00:15:19.012 "state": "enabled", 00:15:19.012 "thread": "nvmf_tgt_poll_group_000", 00:15:19.012 "listen_address": { 00:15:19.012 "trtype": "TCP", 00:15:19.012 "adrfam": "IPv4", 00:15:19.012 "traddr": "10.0.0.2", 00:15:19.012 "trsvcid": "4420" 00:15:19.012 }, 00:15:19.012 "peer_address": { 00:15:19.012 "trtype": "TCP", 00:15:19.012 "adrfam": "IPv4", 00:15:19.012 "traddr": "10.0.0.1", 00:15:19.012 "trsvcid": "42770" 00:15:19.012 }, 00:15:19.012 "auth": { 00:15:19.012 "state": "completed", 00:15:19.012 "digest": "sha512", 00:15:19.012 "dhgroup": "null" 00:15:19.012 } 00:15:19.012 } 00:15:19.012 ]' 00:15:19.012 18:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:19.012 18:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:19.012 18:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:19.012 18:57:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:19.012 18:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:19.012 18:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.012 18:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.012 18:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.270 18:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDEyNDlkZGVlMjRmY2EwYWJlM2EwMWZiMzY1MmUxMjBjYjU2ZGVlMmY2N2ZjNjFlMjdhYzBlYzIzNmMyMDRhZCsG6tQ=: 00:15:20.203 18:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.203 18:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:20.203 18:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.203 18:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.460 18:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.460 18:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:20.461 18:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:20.461 18:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:20.461 18:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:20.461 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:15:20.461 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:20.461 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:20.461 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:20.461 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:20.461 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.461 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.461 18:57:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.461 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.718 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.718 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.718 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.975 00:15:20.975 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:20.975 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:20.975 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.233 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.233 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.233 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.233 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.233 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.233 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:21.233 { 00:15:21.233 "cntlid": 105, 00:15:21.233 "qid": 0, 00:15:21.233 "state": "enabled", 00:15:21.233 "thread": "nvmf_tgt_poll_group_000", 00:15:21.233 "listen_address": { 00:15:21.233 "trtype": "TCP", 00:15:21.233 "adrfam": "IPv4", 00:15:21.233 "traddr": "10.0.0.2", 00:15:21.233 "trsvcid": "4420" 00:15:21.233 }, 00:15:21.233 "peer_address": { 00:15:21.233 "trtype": "TCP", 00:15:21.233 "adrfam": "IPv4", 00:15:21.233 "traddr": "10.0.0.1", 00:15:21.233 "trsvcid": "42796" 00:15:21.233 }, 00:15:21.233 "auth": { 00:15:21.233 "state": "completed", 00:15:21.233 "digest": "sha512", 00:15:21.233 "dhgroup": "ffdhe2048" 00:15:21.233 } 00:15:21.233 } 00:15:21.233 ]' 00:15:21.233 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:21.233 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:21.233 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:21.233 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:21.233 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:21.233 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.233 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.233 18:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.491 18:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTFlMzJiNDE4YWE3OWQ4NzM4YzAyZDkyODAyYWE0NDJkOTU2NDM4NWEzMWU1MTRmJe5BTw==: --dhchap-ctrl-secret DHHC-1:03:NDllYTlkZmM3MmQ1ZjNhMjI2NGJhNWYyNjNhNTgxNDE3YzIyMmVhM2U5NzVmMDYxNzIwMTA1ZGQzN2M3MjE1MNOdqyA=: 00:15:22.423 18:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.423 18:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:22.423 18:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.423 18:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.423 18:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.423 18:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:22.423 18:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:22.423 18:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:22.681 18:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:15:22.681 18:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:22.681 18:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:22.681 18:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:22.681 18:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:22.681 18:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.681 18:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.681 18:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.681 18:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.681 18:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:15:22.681 18:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.681 18:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.939 00:15:22.939 18:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:22.939 18:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:22.939 18:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.197 18:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.197 18:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.197 18:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.197 18:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.197 18:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.197 18:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:23.197 { 00:15:23.197 "cntlid": 107, 00:15:23.197 "qid": 0, 00:15:23.197 "state": "enabled", 00:15:23.197 "thread": "nvmf_tgt_poll_group_000", 00:15:23.197 "listen_address": { 00:15:23.197 "trtype": "TCP", 00:15:23.197 "adrfam": "IPv4", 00:15:23.197 "traddr": "10.0.0.2", 00:15:23.197 "trsvcid": "4420" 00:15:23.197 }, 00:15:23.197 "peer_address": { 00:15:23.197 "trtype": "TCP", 00:15:23.197 "adrfam": "IPv4", 00:15:23.197 "traddr": "10.0.0.1", 00:15:23.197 "trsvcid": "42826" 00:15:23.197 }, 00:15:23.197 "auth": { 00:15:23.197 "state": "completed", 00:15:23.197 "digest": "sha512", 00:15:23.197 "dhgroup": "ffdhe2048" 00:15:23.197 } 00:15:23.197 } 00:15:23.197 ]' 00:15:23.197 18:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:23.455 18:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:23.455 18:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:23.455 18:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:23.455 18:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:23.455 18:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.455 18:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.455 18:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.713 18:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:YjA4YmI0YTBlNTIyNzUzYjFiNjZiMjYxYTg5MjFkYzhLPinR: --dhchap-ctrl-secret DHHC-1:02:NjJlOTYxMmYzODE0NzViYThjMWRjZjA5ZDIzYTEzMmJkNDRjMWYwNGQ0NjMzYjZiLvkyDg==: 00:15:24.645 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.645 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:24.645 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.645 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.645 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.645 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:24.645 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:24.645 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:24.903 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:15:24.903 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:24.903 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:24.903 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:24.903 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:24.903 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.903 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.903 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.903 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.903 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.903 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
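
For readers skimming the trace: each connect_authenticate round above reduces to a handful of RPC calls. The sketch below replays the sha512/ffdhe2048, key2 round just logged, using only commands that appear in the trace; the tgt_rpc/host_rpc variable names are ours, and it assumes the target is already listening on 10.0.0.2:4420 with the DH-HMAC-CHAP keys loaded.

  # Replay of one connect_authenticate round (sketch; variable names are ours).
  # host_rpc drives the SPDK initiator (bdev_nvme) over /var/tmp/host.sock,
  # tgt_rpc drives the nvmf target over the default RPC socket.
  tgt_rpc='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py'
  host_rpc="$tgt_rpc -s /var/tmp/host.sock"
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # Pin the initiator to a single digest/dhgroup so the negotiation is deterministic.
  $host_rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

  # Authorize the host NQN on the subsystem with bidirectional keys.
  $tgt_rpc nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Attaching the controller is what actually runs DH-HMAC-CHAP.
  $host_rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Verify the controller exists and the qpair negotiated what we pinned.
  $host_rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  $tgt_rpc nvmf_subsystem_get_qpairs "$SUBNQN" \
      | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
  # expect: sha512 / ffdhe2048 / completed

  # Tear down before the next digest/dhgroup/key combination.
  $host_rpc bdev_nvme_detach_controller nvme0
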
00:15:24.903 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.161 00:15:25.161 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:25.161 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:25.161 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.419 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.419 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.419 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.419 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.419 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.419 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:25.419 { 00:15:25.419 "cntlid": 109, 00:15:25.419 "qid": 0, 00:15:25.419 "state": "enabled", 00:15:25.419 "thread": "nvmf_tgt_poll_group_000", 00:15:25.419 "listen_address": { 00:15:25.419 "trtype": "TCP", 00:15:25.419 "adrfam": "IPv4", 00:15:25.419 "traddr": "10.0.0.2", 00:15:25.419 "trsvcid": "4420" 00:15:25.419 }, 00:15:25.419 "peer_address": { 00:15:25.419 "trtype": "TCP", 00:15:25.419 "adrfam": "IPv4", 00:15:25.419 "traddr": "10.0.0.1", 00:15:25.419 "trsvcid": "42866" 00:15:25.419 }, 00:15:25.419 "auth": { 00:15:25.419 "state": "completed", 00:15:25.419 "digest": "sha512", 00:15:25.419 "dhgroup": "ffdhe2048" 00:15:25.419 } 00:15:25.419 } 00:15:25.419 ]' 00:15:25.419 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:25.419 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:25.419 18:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:25.676 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:25.676 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:25.676 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.676 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.676 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.933 18:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjBiOTAyZWI3MDM5MjhjYzExNzNkY2MwMTQ0YmZjZTg2NTdlMzBhMzM0MDIzY2M0x+cEuw==: --dhchap-ctrl-secret DHHC-1:01:ZmNmOWU1ZjA5ZjUxNTYyNjFkMGY0MmM1NmVhOGFkODDesNza: 00:15:26.865 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.865 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:26.865 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.865 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.865 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.865 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:26.865 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:26.865 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:27.123 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:15:27.123 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:27.123 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:27.123 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:27.123 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:27.123 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.123 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:27.123 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.123 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.123 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.123 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:27.123 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:27.390 00:15:27.390 18:58:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:27.390 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:27.390 18:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.683 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.683 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.683 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.683 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.683 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.683 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:27.683 { 00:15:27.683 "cntlid": 111, 00:15:27.683 "qid": 0, 00:15:27.683 "state": "enabled", 00:15:27.683 "thread": "nvmf_tgt_poll_group_000", 00:15:27.683 "listen_address": { 00:15:27.683 "trtype": "TCP", 00:15:27.683 "adrfam": "IPv4", 00:15:27.683 "traddr": "10.0.0.2", 00:15:27.683 "trsvcid": "4420" 00:15:27.683 }, 00:15:27.683 "peer_address": { 00:15:27.683 "trtype": "TCP", 00:15:27.683 "adrfam": "IPv4", 00:15:27.683 "traddr": "10.0.0.1", 00:15:27.683 "trsvcid": "42904" 00:15:27.683 }, 00:15:27.683 "auth": { 00:15:27.683 "state": "completed", 00:15:27.683 "digest": "sha512", 00:15:27.683 "dhgroup": "ffdhe2048" 00:15:27.683 } 00:15:27.683 } 00:15:27.683 ]' 00:15:27.683 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:27.683 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:27.683 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:27.941 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:27.941 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:27.941 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.941 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.941 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.199 18:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDEyNDlkZGVlMjRmY2EwYWJlM2EwMWZiMzY1MmUxMjBjYjU2ZGVlMmY2N2ZjNjFlMjdhYzBlYzIzNmMyMDRhZCsG6tQ=: 00:15:29.131 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.131 18:58:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:29.131 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.131 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.131 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.131 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:29.131 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:29.131 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:29.131 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:29.388 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:15:29.388 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:29.388 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:29.388 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:29.388 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:29.388 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.388 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.388 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.388 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.388 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.388 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.388 18:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.646 00:15:29.646 18:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:29.646 18:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:29.646 18:58:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.903 18:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.903 18:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.903 18:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.903 18:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.903 18:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.903 18:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:29.903 { 00:15:29.903 "cntlid": 113, 00:15:29.903 "qid": 0, 00:15:29.903 "state": "enabled", 00:15:29.903 "thread": "nvmf_tgt_poll_group_000", 00:15:29.903 "listen_address": { 00:15:29.903 "trtype": "TCP", 00:15:29.903 "adrfam": "IPv4", 00:15:29.903 "traddr": "10.0.0.2", 00:15:29.903 "trsvcid": "4420" 00:15:29.903 }, 00:15:29.903 "peer_address": { 00:15:29.903 "trtype": "TCP", 00:15:29.903 "adrfam": "IPv4", 00:15:29.903 "traddr": "10.0.0.1", 00:15:29.903 "trsvcid": "36202" 00:15:29.903 }, 00:15:29.903 "auth": { 00:15:29.903 "state": "completed", 00:15:29.903 "digest": "sha512", 00:15:29.903 "dhgroup": "ffdhe3072" 00:15:29.903 } 00:15:29.903 } 00:15:29.903 ]' 00:15:29.903 18:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:30.161 18:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:30.161 18:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:30.161 18:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:30.161 18:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:30.161 18:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.161 18:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.161 18:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.418 18:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTFlMzJiNDE4YWE3OWQ4NzM4YzAyZDkyODAyYWE0NDJkOTU2NDM4NWEzMWU1MTRmJe5BTw==: --dhchap-ctrl-secret DHHC-1:03:NDllYTlkZmM3MmQ1ZjNhMjI2NGJhNWYyNjNhNTgxNDE3YzIyMmVhM2U5NzVmMDYxNzIwMTA1ZGQzN2M3MjE1MNOdqyA=: 00:15:31.350 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.350 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:31.350 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.350 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.350 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.350 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:31.350 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:31.350 18:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:31.608 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:15:31.608 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:31.608 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:31.608 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:31.608 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:31.608 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.608 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.608 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.608 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.608 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.608 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.608 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.865 00:15:31.865 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:31.865 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:31.865 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.122 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:15:32.122 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.122 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.122 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.122 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.122 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:32.122 { 00:15:32.122 "cntlid": 115, 00:15:32.122 "qid": 0, 00:15:32.122 "state": "enabled", 00:15:32.122 "thread": "nvmf_tgt_poll_group_000", 00:15:32.122 "listen_address": { 00:15:32.122 "trtype": "TCP", 00:15:32.122 "adrfam": "IPv4", 00:15:32.122 "traddr": "10.0.0.2", 00:15:32.122 "trsvcid": "4420" 00:15:32.122 }, 00:15:32.122 "peer_address": { 00:15:32.122 "trtype": "TCP", 00:15:32.122 "adrfam": "IPv4", 00:15:32.122 "traddr": "10.0.0.1", 00:15:32.122 "trsvcid": "36220" 00:15:32.122 }, 00:15:32.122 "auth": { 00:15:32.122 "state": "completed", 00:15:32.122 "digest": "sha512", 00:15:32.122 "dhgroup": "ffdhe3072" 00:15:32.122 } 00:15:32.122 } 00:15:32.122 ]' 00:15:32.122 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:32.380 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:32.380 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:32.380 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:32.380 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:32.380 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.380 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.380 18:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.637 18:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:YjA4YmI0YTBlNTIyNzUzYjFiNjZiMjYxYTg5MjFkYzhLPinR: --dhchap-ctrl-secret DHHC-1:02:NjJlOTYxMmYzODE0NzViYThjMWRjZjA5ZDIzYTEzMmJkNDRjMWYwNGQ0NjMzYjZiLvkyDg==: 00:15:33.570 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.570 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:33.570 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.570 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.570 18:58:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.570 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:33.570 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:33.570 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:33.826 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:15:33.826 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:33.826 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:33.826 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:33.826 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:33.826 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.826 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.826 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.826 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.826 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.826 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.826 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.083 00:15:34.083 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:34.083 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:34.083 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.340 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.340 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.340 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.340 18:58:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.340 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.340 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:34.340 { 00:15:34.340 "cntlid": 117, 00:15:34.340 "qid": 0, 00:15:34.340 "state": "enabled", 00:15:34.340 "thread": "nvmf_tgt_poll_group_000", 00:15:34.340 "listen_address": { 00:15:34.340 "trtype": "TCP", 00:15:34.340 "adrfam": "IPv4", 00:15:34.340 "traddr": "10.0.0.2", 00:15:34.340 "trsvcid": "4420" 00:15:34.340 }, 00:15:34.340 "peer_address": { 00:15:34.340 "trtype": "TCP", 00:15:34.340 "adrfam": "IPv4", 00:15:34.340 "traddr": "10.0.0.1", 00:15:34.340 "trsvcid": "36244" 00:15:34.340 }, 00:15:34.340 "auth": { 00:15:34.340 "state": "completed", 00:15:34.340 "digest": "sha512", 00:15:34.340 "dhgroup": "ffdhe3072" 00:15:34.340 } 00:15:34.340 } 00:15:34.340 ]' 00:15:34.340 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:34.597 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:34.597 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:34.597 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:34.597 18:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:34.597 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.597 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.597 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.854 18:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjBiOTAyZWI3MDM5MjhjYzExNzNkY2MwMTQ0YmZjZTg2NTdlMzBhMzM0MDIzY2M0x+cEuw==: --dhchap-ctrl-secret DHHC-1:01:ZmNmOWU1ZjA5ZjUxNTYyNjFkMGY0MmM1NmVhOGFkODDesNza: 00:15:35.785 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.785 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:35.785 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.785 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.785 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.785 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:35.785 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:15:35.785 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:36.044 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:15:36.044 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:36.044 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:36.044 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:36.044 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:36.044 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.044 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:36.045 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.045 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.045 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.045 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:36.045 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:36.302 00:15:36.302 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:36.302 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.302 18:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:36.559 18:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.559 18:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.559 18:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.559 18:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.559 18:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.559 18:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:36.559 { 00:15:36.559 "cntlid": 119, 00:15:36.559 "qid": 0, 00:15:36.559 "state": "enabled", 00:15:36.559 "thread": 
"nvmf_tgt_poll_group_000", 00:15:36.559 "listen_address": { 00:15:36.559 "trtype": "TCP", 00:15:36.559 "adrfam": "IPv4", 00:15:36.559 "traddr": "10.0.0.2", 00:15:36.559 "trsvcid": "4420" 00:15:36.559 }, 00:15:36.559 "peer_address": { 00:15:36.559 "trtype": "TCP", 00:15:36.559 "adrfam": "IPv4", 00:15:36.559 "traddr": "10.0.0.1", 00:15:36.559 "trsvcid": "36262" 00:15:36.559 }, 00:15:36.559 "auth": { 00:15:36.559 "state": "completed", 00:15:36.559 "digest": "sha512", 00:15:36.559 "dhgroup": "ffdhe3072" 00:15:36.559 } 00:15:36.559 } 00:15:36.559 ]' 00:15:36.559 18:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:36.821 18:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:36.821 18:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:36.821 18:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:36.821 18:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:36.821 18:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.821 18:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.821 18:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.079 18:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDEyNDlkZGVlMjRmY2EwYWJlM2EwMWZiMzY1MmUxMjBjYjU2ZGVlMmY2N2ZjNjFlMjdhYzBlYzIzNmMyMDRhZCsG6tQ=: 00:15:38.011 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.011 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:38.011 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.011 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.011 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.011 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:38.011 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:38.011 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:38.011 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:38.269 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:15:38.269 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:38.269 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:38.269 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:38.269 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:38.269 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.269 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.269 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.269 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.269 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.269 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.269 18:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.527 00:15:38.527 18:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:38.527 18:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:38.527 18:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.785 18:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.785 18:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.785 18:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.785 18:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.785 18:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.785 18:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:38.785 { 00:15:38.785 "cntlid": 121, 00:15:38.785 "qid": 0, 00:15:38.785 "state": "enabled", 00:15:38.785 "thread": "nvmf_tgt_poll_group_000", 00:15:38.785 "listen_address": { 00:15:38.785 "trtype": "TCP", 00:15:38.785 "adrfam": "IPv4", 00:15:38.785 "traddr": "10.0.0.2", 00:15:38.785 "trsvcid": "4420" 00:15:38.785 }, 00:15:38.785 "peer_address": { 00:15:38.785 "trtype": "TCP", 00:15:38.785 "adrfam": 
"IPv4", 00:15:38.785 "traddr": "10.0.0.1", 00:15:38.785 "trsvcid": "50886" 00:15:38.785 }, 00:15:38.785 "auth": { 00:15:38.785 "state": "completed", 00:15:38.785 "digest": "sha512", 00:15:38.785 "dhgroup": "ffdhe4096" 00:15:38.785 } 00:15:38.785 } 00:15:38.785 ]' 00:15:38.785 18:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:39.043 18:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:39.043 18:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:39.043 18:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:39.043 18:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:39.043 18:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.043 18:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.043 18:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.300 18:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTFlMzJiNDE4YWE3OWQ4NzM4YzAyZDkyODAyYWE0NDJkOTU2NDM4NWEzMWU1MTRmJe5BTw==: --dhchap-ctrl-secret DHHC-1:03:NDllYTlkZmM3MmQ1ZjNhMjI2NGJhNWYyNjNhNTgxNDE3YzIyMmVhM2U5NzVmMDYxNzIwMTA1ZGQzN2M3MjE1MNOdqyA=: 00:15:40.234 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.234 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:40.234 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.234 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.234 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.234 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:40.234 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:40.234 18:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:40.493 18:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:15:40.493 18:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:40.493 18:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:40.493 
18:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:40.493 18:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:40.493 18:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.493 18:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.493 18:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.493 18:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.493 18:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.493 18:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.493 18:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.059 00:15:41.059 18:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:41.059 18:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:41.059 18:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.316 18:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.316 18:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.316 18:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.316 18:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.316 18:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.317 18:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:41.317 { 00:15:41.317 "cntlid": 123, 00:15:41.317 "qid": 0, 00:15:41.317 "state": "enabled", 00:15:41.317 "thread": "nvmf_tgt_poll_group_000", 00:15:41.317 "listen_address": { 00:15:41.317 "trtype": "TCP", 00:15:41.317 "adrfam": "IPv4", 00:15:41.317 "traddr": "10.0.0.2", 00:15:41.317 "trsvcid": "4420" 00:15:41.317 }, 00:15:41.317 "peer_address": { 00:15:41.317 "trtype": "TCP", 00:15:41.317 "adrfam": "IPv4", 00:15:41.317 "traddr": "10.0.0.1", 00:15:41.317 "trsvcid": "50908" 00:15:41.317 }, 00:15:41.317 "auth": { 00:15:41.317 "state": "completed", 00:15:41.317 "digest": "sha512", 00:15:41.317 "dhgroup": "ffdhe4096" 00:15:41.317 } 00:15:41.317 } 00:15:41.317 ]' 00:15:41.317 18:58:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:41.317 18:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:41.317 18:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:41.317 18:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:41.317 18:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:41.317 18:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.317 18:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.317 18:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.574 18:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:YjA4YmI0YTBlNTIyNzUzYjFiNjZiMjYxYTg5MjFkYzhLPinR: --dhchap-ctrl-secret DHHC-1:02:NjJlOTYxMmYzODE0NzViYThjMWRjZjA5ZDIzYTEzMmJkNDRjMWYwNGQ0NjMzYjZiLvkyDg==: 00:15:42.507 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.507 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:42.507 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.507 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.507 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.507 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:42.507 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:42.507 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:42.765 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:15:42.765 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:42.765 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:42.765 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:42.765 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:42.765 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:15:42.765 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.765 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.765 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.765 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.765 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.766 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.331 00:15:43.331 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:43.331 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:43.331 18:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.589 18:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.589 18:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.589 18:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.589 18:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.589 18:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.589 18:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:43.589 { 00:15:43.589 "cntlid": 125, 00:15:43.589 "qid": 0, 00:15:43.589 "state": "enabled", 00:15:43.589 "thread": "nvmf_tgt_poll_group_000", 00:15:43.589 "listen_address": { 00:15:43.589 "trtype": "TCP", 00:15:43.589 "adrfam": "IPv4", 00:15:43.589 "traddr": "10.0.0.2", 00:15:43.589 "trsvcid": "4420" 00:15:43.589 }, 00:15:43.589 "peer_address": { 00:15:43.589 "trtype": "TCP", 00:15:43.589 "adrfam": "IPv4", 00:15:43.589 "traddr": "10.0.0.1", 00:15:43.589 "trsvcid": "50940" 00:15:43.589 }, 00:15:43.589 "auth": { 00:15:43.589 "state": "completed", 00:15:43.589 "digest": "sha512", 00:15:43.589 "dhgroup": "ffdhe4096" 00:15:43.589 } 00:15:43.589 } 00:15:43.589 ]' 00:15:43.589 18:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:43.589 18:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:43.589 18:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:43.589 
18:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:43.589 18:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:43.589 18:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.589 18:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.589 18:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.847 18:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjBiOTAyZWI3MDM5MjhjYzExNzNkY2MwMTQ0YmZjZTg2NTdlMzBhMzM0MDIzY2M0x+cEuw==: --dhchap-ctrl-secret DHHC-1:01:ZmNmOWU1ZjA5ZjUxNTYyNjFkMGY0MmM1NmVhOGFkODDesNza: 00:15:45.244 18:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.244 18:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:45.244 18:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.244 18:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.244 18:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.244 18:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:45.244 18:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:45.244 18:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:45.244 18:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:15:45.244 18:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:45.244 18:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:45.244 18:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:45.244 18:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:45.244 18:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.244 18:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:45.244 18:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:45.244 18:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.244 18:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.244 18:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:45.244 18:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:45.516 00:15:45.516 18:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:45.516 18:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.516 18:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:45.774 18:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.774 18:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.774 18:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.774 18:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.774 18:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.774 18:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:45.774 { 00:15:45.774 "cntlid": 127, 00:15:45.774 "qid": 0, 00:15:45.774 "state": "enabled", 00:15:45.774 "thread": "nvmf_tgt_poll_group_000", 00:15:45.774 "listen_address": { 00:15:45.774 "trtype": "TCP", 00:15:45.774 "adrfam": "IPv4", 00:15:45.774 "traddr": "10.0.0.2", 00:15:45.774 "trsvcid": "4420" 00:15:45.774 }, 00:15:45.774 "peer_address": { 00:15:45.774 "trtype": "TCP", 00:15:45.774 "adrfam": "IPv4", 00:15:45.774 "traddr": "10.0.0.1", 00:15:45.774 "trsvcid": "50964" 00:15:45.774 }, 00:15:45.774 "auth": { 00:15:45.774 "state": "completed", 00:15:45.774 "digest": "sha512", 00:15:45.774 "dhgroup": "ffdhe4096" 00:15:45.774 } 00:15:45.774 } 00:15:45.774 ]' 00:15:45.774 18:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:45.774 18:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:45.774 18:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:46.031 18:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:46.031 18:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:46.031 18:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.031 18:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.031 18:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.288 18:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDEyNDlkZGVlMjRmY2EwYWJlM2EwMWZiMzY1MmUxMjBjYjU2ZGVlMmY2N2ZjNjFlMjdhYzBlYzIzNmMyMDRhZCsG6tQ=: 00:15:47.221 18:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.221 18:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:47.221 18:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.221 18:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.221 18:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.221 18:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:47.221 18:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:47.221 18:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:47.221 18:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:47.478 18:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:15:47.478 18:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:47.478 18:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:47.478 18:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:47.478 18:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:47.478 18:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.478 18:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.478 18:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.478 18:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.479 18:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.479 18:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.479 18:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.043 00:15:48.043 18:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:48.043 18:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:48.043 18:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.303 18:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.303 18:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.303 18:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.303 18:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.303 18:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.303 18:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:48.303 { 00:15:48.303 "cntlid": 129, 00:15:48.303 "qid": 0, 00:15:48.303 "state": "enabled", 00:15:48.303 "thread": "nvmf_tgt_poll_group_000", 00:15:48.303 "listen_address": { 00:15:48.303 "trtype": "TCP", 00:15:48.303 "adrfam": "IPv4", 00:15:48.303 "traddr": "10.0.0.2", 00:15:48.303 "trsvcid": "4420" 00:15:48.303 }, 00:15:48.303 "peer_address": { 00:15:48.303 "trtype": "TCP", 00:15:48.303 "adrfam": "IPv4", 00:15:48.303 "traddr": "10.0.0.1", 00:15:48.303 "trsvcid": "50984" 00:15:48.303 }, 00:15:48.303 "auth": { 00:15:48.303 "state": "completed", 00:15:48.303 "digest": "sha512", 00:15:48.303 "dhgroup": "ffdhe6144" 00:15:48.303 } 00:15:48.303 } 00:15:48.303 ]' 00:15:48.303 18:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:48.303 18:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:48.303 18:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:48.303 18:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:48.303 18:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:48.303 18:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.303 18:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.303 18:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.560 
18:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTFlMzJiNDE4YWE3OWQ4NzM4YzAyZDkyODAyYWE0NDJkOTU2NDM4NWEzMWU1MTRmJe5BTw==: --dhchap-ctrl-secret DHHC-1:03:NDllYTlkZmM3MmQ1ZjNhMjI2NGJhNWYyNjNhNTgxNDE3YzIyMmVhM2U5NzVmMDYxNzIwMTA1ZGQzN2M3MjE1MNOdqyA=: 00:15:49.492 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.492 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:49.492 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.492 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.492 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.492 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:49.493 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:49.493 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:50.055 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:15:50.055 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:50.055 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:50.055 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:50.055 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:50.055 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.055 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.055 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.055 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.055 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.055 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.055 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.617 00:15:50.617 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:50.617 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:50.617 18:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.873 18:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.873 18:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.873 18:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.873 18:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.873 18:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.873 18:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:50.873 { 00:15:50.873 "cntlid": 131, 00:15:50.873 "qid": 0, 00:15:50.873 "state": "enabled", 00:15:50.873 "thread": "nvmf_tgt_poll_group_000", 00:15:50.873 "listen_address": { 00:15:50.873 "trtype": "TCP", 00:15:50.873 "adrfam": "IPv4", 00:15:50.873 "traddr": "10.0.0.2", 00:15:50.873 "trsvcid": "4420" 00:15:50.873 }, 00:15:50.873 "peer_address": { 00:15:50.873 "trtype": "TCP", 00:15:50.873 "adrfam": "IPv4", 00:15:50.873 "traddr": "10.0.0.1", 00:15:50.873 "trsvcid": "33028" 00:15:50.873 }, 00:15:50.873 "auth": { 00:15:50.873 "state": "completed", 00:15:50.873 "digest": "sha512", 00:15:50.873 "dhgroup": "ffdhe6144" 00:15:50.873 } 00:15:50.873 } 00:15:50.873 ]' 00:15:50.873 18:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:50.873 18:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:50.873 18:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:50.873 18:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:50.873 18:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:50.873 18:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.873 18:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.873 18:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.129 18:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret 
DHHC-1:01:YjA4YmI0YTBlNTIyNzUzYjFiNjZiMjYxYTg5MjFkYzhLPinR: --dhchap-ctrl-secret DHHC-1:02:NjJlOTYxMmYzODE0NzViYThjMWRjZjA5ZDIzYTEzMmJkNDRjMWYwNGQ0NjMzYjZiLvkyDg==: 00:15:52.061 18:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.061 18:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:52.061 18:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.061 18:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.061 18:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.061 18:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:52.061 18:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:52.061 18:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:52.319 18:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:15:52.319 18:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:52.319 18:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:52.319 18:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:52.319 18:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:52.319 18:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.319 18:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.319 18:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.319 18:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.319 18:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.319 18:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.319 18:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.885 
00:15:52.885 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:52.885 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:52.885 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.142 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.142 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.142 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.142 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.142 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.142 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:53.142 { 00:15:53.142 "cntlid": 133, 00:15:53.142 "qid": 0, 00:15:53.142 "state": "enabled", 00:15:53.142 "thread": "nvmf_tgt_poll_group_000", 00:15:53.142 "listen_address": { 00:15:53.142 "trtype": "TCP", 00:15:53.142 "adrfam": "IPv4", 00:15:53.142 "traddr": "10.0.0.2", 00:15:53.142 "trsvcid": "4420" 00:15:53.142 }, 00:15:53.142 "peer_address": { 00:15:53.142 "trtype": "TCP", 00:15:53.142 "adrfam": "IPv4", 00:15:53.142 "traddr": "10.0.0.1", 00:15:53.142 "trsvcid": "33052" 00:15:53.142 }, 00:15:53.142 "auth": { 00:15:53.143 "state": "completed", 00:15:53.143 "digest": "sha512", 00:15:53.143 "dhgroup": "ffdhe6144" 00:15:53.143 } 00:15:53.143 } 00:15:53.143 ]' 00:15:53.143 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:53.400 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:53.400 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:53.400 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:53.400 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:53.400 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.400 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.400 18:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.658 18:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjBiOTAyZWI3MDM5MjhjYzExNzNkY2MwMTQ0YmZjZTg2NTdlMzBhMzM0MDIzY2M0x+cEuw==: --dhchap-ctrl-secret DHHC-1:01:ZmNmOWU1ZjA5ZjUxNTYyNjFkMGY0MmM1NmVhOGFkODDesNza: 00:15:54.591 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.591 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:15:54.591 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:54.591 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.591 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.591 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.591 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:54.591 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:54.591 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:54.849 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:15:54.849 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:54.849 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:54.849 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:54.849 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:54.849 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.849 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:54.849 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.849 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.849 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.849 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:54.849 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:55.413 00:15:55.413 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:55.413 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:55.413 18:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:15:55.670 18:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.670 18:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.670 18:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.670 18:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.670 18:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.670 18:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:55.670 { 00:15:55.670 "cntlid": 135, 00:15:55.670 "qid": 0, 00:15:55.670 "state": "enabled", 00:15:55.670 "thread": "nvmf_tgt_poll_group_000", 00:15:55.670 "listen_address": { 00:15:55.670 "trtype": "TCP", 00:15:55.670 "adrfam": "IPv4", 00:15:55.670 "traddr": "10.0.0.2", 00:15:55.670 "trsvcid": "4420" 00:15:55.670 }, 00:15:55.670 "peer_address": { 00:15:55.670 "trtype": "TCP", 00:15:55.670 "adrfam": "IPv4", 00:15:55.670 "traddr": "10.0.0.1", 00:15:55.670 "trsvcid": "33074" 00:15:55.670 }, 00:15:55.670 "auth": { 00:15:55.670 "state": "completed", 00:15:55.670 "digest": "sha512", 00:15:55.670 "dhgroup": "ffdhe6144" 00:15:55.670 } 00:15:55.670 } 00:15:55.671 ]' 00:15:55.671 18:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:55.671 18:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:55.671 18:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:55.930 18:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:55.930 18:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:55.930 18:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.930 18:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.930 18:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.188 18:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDEyNDlkZGVlMjRmY2EwYWJlM2EwMWZiMzY1MmUxMjBjYjU2ZGVlMmY2N2ZjNjFlMjdhYzBlYzIzNmMyMDRhZCsG6tQ=: 00:15:57.121 18:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.121 18:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:57.121 18:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.121 18:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:57.121 18:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.121 18:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:57.121 18:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:57.121 18:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:57.121 18:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:57.379 18:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:15:57.379 18:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:57.379 18:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:57.380 18:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:57.380 18:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:57.380 18:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.380 18:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.380 18:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.380 18:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.380 18:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.380 18:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.380 18:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.312 00:15:58.312 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:58.312 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:58.313 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.570 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.570 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:15:58.570 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.570 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.570 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.570 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:58.570 { 00:15:58.570 "cntlid": 137, 00:15:58.570 "qid": 0, 00:15:58.570 "state": "enabled", 00:15:58.570 "thread": "nvmf_tgt_poll_group_000", 00:15:58.570 "listen_address": { 00:15:58.570 "trtype": "TCP", 00:15:58.570 "adrfam": "IPv4", 00:15:58.570 "traddr": "10.0.0.2", 00:15:58.570 "trsvcid": "4420" 00:15:58.570 }, 00:15:58.570 "peer_address": { 00:15:58.570 "trtype": "TCP", 00:15:58.570 "adrfam": "IPv4", 00:15:58.570 "traddr": "10.0.0.1", 00:15:58.570 "trsvcid": "33094" 00:15:58.570 }, 00:15:58.570 "auth": { 00:15:58.570 "state": "completed", 00:15:58.570 "digest": "sha512", 00:15:58.570 "dhgroup": "ffdhe8192" 00:15:58.570 } 00:15:58.570 } 00:15:58.570 ]' 00:15:58.570 18:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:58.570 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:58.570 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:58.570 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:58.571 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:58.571 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.571 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.571 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.829 18:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTFlMzJiNDE4YWE3OWQ4NzM4YzAyZDkyODAyYWE0NDJkOTU2NDM4NWEzMWU1MTRmJe5BTw==: --dhchap-ctrl-secret DHHC-1:03:NDllYTlkZmM3MmQ1ZjNhMjI2NGJhNWYyNjNhNTgxNDE3YzIyMmVhM2U5NzVmMDYxNzIwMTA1ZGQzN2M3MjE1MNOdqyA=: 00:15:59.762 18:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.762 18:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:59.762 18:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.762 18:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.762 18:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.762 18:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:59.762 18:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:59.762 18:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:00.020 18:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:16:00.020 18:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:00.020 18:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:00.020 18:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:00.020 18:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:00.020 18:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.020 18:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.020 18:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.020 18:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.020 18:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.020 18:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.020 18:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.953 00:16:00.953 18:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:00.953 18:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:00.953 18:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.210 18:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.210 18:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.210 18:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.210 18:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.210 18:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.210 18:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:01.210 { 00:16:01.210 "cntlid": 139, 00:16:01.210 "qid": 0, 00:16:01.210 "state": "enabled", 00:16:01.210 "thread": "nvmf_tgt_poll_group_000", 00:16:01.210 "listen_address": { 00:16:01.210 "trtype": "TCP", 00:16:01.210 "adrfam": "IPv4", 00:16:01.210 "traddr": "10.0.0.2", 00:16:01.210 "trsvcid": "4420" 00:16:01.210 }, 00:16:01.210 "peer_address": { 00:16:01.211 "trtype": "TCP", 00:16:01.211 "adrfam": "IPv4", 00:16:01.211 "traddr": "10.0.0.1", 00:16:01.211 "trsvcid": "34106" 00:16:01.211 }, 00:16:01.211 "auth": { 00:16:01.211 "state": "completed", 00:16:01.211 "digest": "sha512", 00:16:01.211 "dhgroup": "ffdhe8192" 00:16:01.211 } 00:16:01.211 } 00:16:01.211 ]' 00:16:01.211 18:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:01.211 18:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:01.211 18:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:01.211 18:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:01.211 18:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:01.469 18:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.469 18:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.469 18:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.727 18:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:YjA4YmI0YTBlNTIyNzUzYjFiNjZiMjYxYTg5MjFkYzhLPinR: --dhchap-ctrl-secret DHHC-1:02:NjJlOTYxMmYzODE0NzViYThjMWRjZjA5ZDIzYTEzMmJkNDRjMWYwNGQ0NjMzYjZiLvkyDg==: 00:16:02.660 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.660 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:02.660 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.660 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.660 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.660 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:02.660 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:02.660 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:02.933 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:16:02.933 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:02.933 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:02.933 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:02.933 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:02.933 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.933 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.933 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.933 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.933 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.933 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.933 18:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.905 00:16:03.905 18:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:03.905 18:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:03.905 18:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.163 18:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.163 18:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.163 18:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.163 18:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.163 18:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.163 18:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:04.163 { 00:16:04.163 "cntlid": 141, 00:16:04.163 "qid": 0, 00:16:04.163 "state": "enabled", 00:16:04.163 "thread": "nvmf_tgt_poll_group_000", 00:16:04.163 "listen_address": 
{ 00:16:04.163 "trtype": "TCP", 00:16:04.163 "adrfam": "IPv4", 00:16:04.163 "traddr": "10.0.0.2", 00:16:04.163 "trsvcid": "4420" 00:16:04.163 }, 00:16:04.163 "peer_address": { 00:16:04.163 "trtype": "TCP", 00:16:04.163 "adrfam": "IPv4", 00:16:04.163 "traddr": "10.0.0.1", 00:16:04.163 "trsvcid": "34148" 00:16:04.163 }, 00:16:04.163 "auth": { 00:16:04.163 "state": "completed", 00:16:04.163 "digest": "sha512", 00:16:04.163 "dhgroup": "ffdhe8192" 00:16:04.163 } 00:16:04.163 } 00:16:04.163 ]' 00:16:04.163 18:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:04.163 18:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:04.163 18:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:04.163 18:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:04.163 18:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:04.163 18:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.163 18:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.163 18:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.421 18:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjBiOTAyZWI3MDM5MjhjYzExNzNkY2MwMTQ0YmZjZTg2NTdlMzBhMzM0MDIzY2M0x+cEuw==: --dhchap-ctrl-secret DHHC-1:01:ZmNmOWU1ZjA5ZjUxNTYyNjFkMGY0MmM1NmVhOGFkODDesNza: 00:16:05.354 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.354 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:05.354 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.354 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.354 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.354 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:05.354 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:05.354 18:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:05.612 18:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:16:05.612 18:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:05.612 18:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:05.612 18:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:05.612 18:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:05.612 18:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.612 18:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:05.612 18:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.612 18:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.612 18:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.612 18:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:05.612 18:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:06.545 00:16:06.545 18:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:06.545 18:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:06.545 18:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.802 18:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.802 18:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.802 18:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.802 18:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.802 18:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.802 18:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:06.802 { 00:16:06.802 "cntlid": 143, 00:16:06.802 "qid": 0, 00:16:06.802 "state": "enabled", 00:16:06.802 "thread": "nvmf_tgt_poll_group_000", 00:16:06.802 "listen_address": { 00:16:06.802 "trtype": "TCP", 00:16:06.802 "adrfam": "IPv4", 00:16:06.802 "traddr": "10.0.0.2", 00:16:06.802 "trsvcid": "4420" 00:16:06.802 }, 00:16:06.802 "peer_address": { 00:16:06.802 "trtype": "TCP", 00:16:06.802 "adrfam": "IPv4", 00:16:06.802 "traddr": "10.0.0.1", 00:16:06.802 "trsvcid": "34168" 00:16:06.802 }, 00:16:06.802 "auth": { 00:16:06.802 "state": "completed", 00:16:06.802 "digest": "sha512", 00:16:06.802 "dhgroup": 
"ffdhe8192" 00:16:06.802 } 00:16:06.802 } 00:16:06.802 ]' 00:16:06.802 18:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:06.802 18:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:06.802 18:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:06.802 18:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:06.802 18:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:06.802 18:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.802 18:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.802 18:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.060 18:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDEyNDlkZGVlMjRmY2EwYWJlM2EwMWZiMzY1MmUxMjBjYjU2ZGVlMmY2N2ZjNjFlMjdhYzBlYzIzNmMyMDRhZCsG6tQ=: 00:16:07.993 18:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.993 18:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:07.993 18:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.993 18:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.994 18:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.252 18:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:08.252 18:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:16:08.252 18:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:08.252 18:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:08.252 18:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:08.252 18:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:08.252 18:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:16:08.252 18:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:08.252 18:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:08.252 18:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:08.252 18:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:08.252 18:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.252 18:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.252 18:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.252 18:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.252 18:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.252 18:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.252 18:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.185 00:16:09.185 18:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:09.185 18:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:09.185 18:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.443 18:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.443 18:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.443 18:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.443 18:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.443 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.443 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:09.443 { 00:16:09.443 "cntlid": 145, 00:16:09.443 "qid": 0, 00:16:09.443 "state": "enabled", 00:16:09.443 "thread": "nvmf_tgt_poll_group_000", 00:16:09.443 "listen_address": { 00:16:09.443 "trtype": "TCP", 00:16:09.443 "adrfam": "IPv4", 00:16:09.443 "traddr": "10.0.0.2", 00:16:09.443 "trsvcid": "4420" 00:16:09.443 }, 00:16:09.443 "peer_address": { 00:16:09.443 "trtype": "TCP", 00:16:09.443 "adrfam": "IPv4", 00:16:09.443 "traddr": "10.0.0.1", 00:16:09.443 "trsvcid": "40682" 00:16:09.443 }, 00:16:09.443 "auth": { 00:16:09.443 
"state": "completed", 00:16:09.443 "digest": "sha512", 00:16:09.443 "dhgroup": "ffdhe8192" 00:16:09.443 } 00:16:09.443 } 00:16:09.443 ]' 00:16:09.443 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:09.443 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:09.443 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:09.700 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:09.700 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:09.700 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.700 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.700 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.958 18:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:MTFlMzJiNDE4YWE3OWQ4NzM4YzAyZDkyODAyYWE0NDJkOTU2NDM4NWEzMWU1MTRmJe5BTw==: --dhchap-ctrl-secret DHHC-1:03:NDllYTlkZmM3MmQ1ZjNhMjI2NGJhNWYyNjNhNTgxNDE3YzIyMmVhM2U5NzVmMDYxNzIwMTA1ZGQzN2M3MjE1MNOdqyA=: 00:16:10.892 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.892 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:10.892 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.892 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.893 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.893 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:16:10.893 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.893 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.893 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.893 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:10.893 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:10.893 18:58:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:10.893 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:10.893 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:10.893 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:10.893 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:10.893 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:10.893 18:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:11.825 request: 00:16:11.825 { 00:16:11.825 "name": "nvme0", 00:16:11.825 "trtype": "tcp", 00:16:11.825 "traddr": "10.0.0.2", 00:16:11.825 "adrfam": "ipv4", 00:16:11.825 "trsvcid": "4420", 00:16:11.825 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:11.825 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:11.825 "prchk_reftag": false, 00:16:11.825 "prchk_guard": false, 00:16:11.825 "hdgst": false, 00:16:11.825 "ddgst": false, 00:16:11.825 "dhchap_key": "key2", 00:16:11.826 "method": "bdev_nvme_attach_controller", 00:16:11.826 "req_id": 1 00:16:11.826 } 00:16:11.826 Got JSON-RPC error response 00:16:11.826 response: 00:16:11.826 { 00:16:11.826 "code": -5, 00:16:11.826 "message": "Input/output error" 00:16:11.826 } 00:16:11.826 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:11.826 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:11.826 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:11.826 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:11.826 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:11.826 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.826 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.826 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.826 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.826 
18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.826 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.826 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.826 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:11.826 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:11.826 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:11.826 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:11.826 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:11.826 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:11.826 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:11.826 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:11.826 18:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:12.760 request: 00:16:12.760 { 00:16:12.760 "name": "nvme0", 00:16:12.760 "trtype": "tcp", 00:16:12.760 "traddr": "10.0.0.2", 00:16:12.760 "adrfam": "ipv4", 00:16:12.760 "trsvcid": "4420", 00:16:12.760 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:12.760 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:12.760 "prchk_reftag": false, 00:16:12.760 "prchk_guard": false, 00:16:12.760 "hdgst": false, 00:16:12.760 "ddgst": false, 00:16:12.760 "dhchap_key": "key1", 00:16:12.760 "dhchap_ctrlr_key": "ckey2", 00:16:12.760 "method": "bdev_nvme_attach_controller", 00:16:12.760 "req_id": 1 00:16:12.760 } 00:16:12.760 Got JSON-RPC error response 00:16:12.760 response: 00:16:12.760 { 00:16:12.760 "code": -5, 00:16:12.760 "message": "Input/output error" 00:16:12.760 } 00:16:12.760 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:12.760 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:12.760 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:12.760 18:58:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:12.760 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:12.760 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.760 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.760 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.760 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:16:12.760 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.760 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.760 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.760 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.760 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:12.760 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.760 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:12.760 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:12.760 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:12.760 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:12.760 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.760 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.693 request: 00:16:13.693 { 00:16:13.693 "name": "nvme0", 00:16:13.693 "trtype": "tcp", 00:16:13.693 "traddr": "10.0.0.2", 00:16:13.693 "adrfam": "ipv4", 00:16:13.693 "trsvcid": "4420", 00:16:13.693 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:13.693 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:13.693 "prchk_reftag": false, 00:16:13.693 "prchk_guard": false, 00:16:13.693 "hdgst": false, 00:16:13.693 "ddgst": false, 00:16:13.693 "dhchap_key": "key1", 00:16:13.693 "dhchap_ctrlr_key": "ckey1", 00:16:13.693 "method": "bdev_nvme_attach_controller", 00:16:13.693 "req_id": 1 00:16:13.693 } 00:16:13.693 Got JSON-RPC error response 00:16:13.693 response: 00:16:13.693 { 00:16:13.693 "code": -5, 00:16:13.693 "message": "Input/output error" 00:16:13.693 } 00:16:13.693 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:13.693 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:13.693 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:13.693 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:13.693 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:13.693 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.693 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.693 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.693 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3136533 00:16:13.693 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3136533 ']' 00:16:13.693 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3136533 00:16:13.693 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:16:13.693 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:13.693 18:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3136533 00:16:13.693 18:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:13.693 18:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:13.693 18:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3136533' 00:16:13.693 killing process with pid 3136533 00:16:13.693 18:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3136533 00:16:13.693 18:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3136533 00:16:13.693 18:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:13.693 18:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:13.693 18:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:13.693 18:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.693 18:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # 
nvmfpid=3159232 00:16:13.693 18:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:13.693 18:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3159232 00:16:13.693 18:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3159232 ']' 00:16:13.693 18:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.693 18:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:13.693 18:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.693 18:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:13.693 18:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.065 18:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:15.065 18:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:15.065 18:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:15.065 18:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:15.065 18:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.065 18:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.065 18:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:15.065 18:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3159232 00:16:15.065 18:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3159232 ']' 00:16:15.065 18:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.065 18:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:15.065 18:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
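For reference, the restart captured above reduces to two steps: relaunch nvmf_tgt inside the test netns with the nvmf_auth debug log flag, then block until its JSON-RPC socket answers. A minimal sketch assuming the binary, netns, and socket names shown in this log (the polling loop is a simplified stand-in for the autotest waitforlisten helper, not the helper itself):

# relaunch the target in the test netns with DH-HMAC-CHAP debug logging enabled
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!
# poll until the default RPC socket accepts commands (simplified stand-in for waitforlisten)
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
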
00:16:15.065 18:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:15.065 18:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.065 18:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:15.065 18:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:15.065 18:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:16:15.065 18:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.065 18:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.324 18:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.324 18:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:16:15.324 18:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:15.324 18:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:15.324 18:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:15.324 18:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:15.324 18:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.324 18:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:15.324 18:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.324 18:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.324 18:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.324 18:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:15.324 18:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:16.256 00:16:16.256 18:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:16.257 18:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:16.257 18:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.515 18:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.515 18:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.515 18:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.515 18:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.515 18:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.515 18:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:16.515 { 00:16:16.515 "cntlid": 1, 00:16:16.515 "qid": 0, 00:16:16.515 "state": "enabled", 00:16:16.515 "thread": "nvmf_tgt_poll_group_000", 00:16:16.515 "listen_address": { 00:16:16.515 "trtype": "TCP", 00:16:16.515 "adrfam": "IPv4", 00:16:16.515 "traddr": "10.0.0.2", 00:16:16.515 "trsvcid": "4420" 00:16:16.515 }, 00:16:16.515 "peer_address": { 00:16:16.515 "trtype": "TCP", 00:16:16.515 "adrfam": "IPv4", 00:16:16.515 "traddr": "10.0.0.1", 00:16:16.515 "trsvcid": "40732" 00:16:16.515 }, 00:16:16.515 "auth": { 00:16:16.515 "state": "completed", 00:16:16.515 "digest": "sha512", 00:16:16.515 "dhgroup": "ffdhe8192" 00:16:16.515 } 00:16:16.515 } 00:16:16.515 ]' 00:16:16.515 18:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:16.515 18:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:16.515 18:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:16.515 18:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:16.515 18:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:16.515 18:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.515 18:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.515 18:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.772 18:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDEyNDlkZGVlMjRmY2EwYWJlM2EwMWZiMzY1MmUxMjBjYjU2ZGVlMmY2N2ZjNjFlMjdhYzBlYzIzNmMyMDRhZCsG6tQ=: 00:16:17.705 18:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.705 18:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:17.705 18:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.705 18:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.705 18:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.705 18:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:17.705 18:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.705 18:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.705 18:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.705 18:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:17.705 18:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:17.963 18:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:17.963 18:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:17.963 18:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:17.963 18:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:17.963 18:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:17.963 18:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:17.963 18:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:17.963 18:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:17.963 18:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:18.221 request: 00:16:18.221 { 00:16:18.221 "name": "nvme0", 00:16:18.221 "trtype": "tcp", 00:16:18.221 "traddr": "10.0.0.2", 00:16:18.221 "adrfam": "ipv4", 00:16:18.221 "trsvcid": "4420", 00:16:18.221 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:18.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:18.221 "prchk_reftag": false, 00:16:18.221 "prchk_guard": false, 00:16:18.221 "hdgst": false, 00:16:18.221 "ddgst": false, 00:16:18.221 "dhchap_key": "key3", 00:16:18.221 "method": "bdev_nvme_attach_controller", 00:16:18.221 "req_id": 1 00:16:18.221 } 00:16:18.221 Got JSON-RPC error response 00:16:18.221 response: 00:16:18.221 { 00:16:18.221 "code": -5, 00:16:18.221 "message": "Input/output error" 00:16:18.221 } 00:16:18.221 18:58:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:18.221 18:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:18.221 18:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:18.221 18:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:18.221 18:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:16:18.221 18:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:16:18.221 18:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:18.221 18:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:18.479 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:18.479 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:18.479 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:18.479 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:18.479 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:18.479 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:18.479 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:18.479 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:18.479 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:18.736 request: 00:16:18.736 { 00:16:18.736 "name": "nvme0", 00:16:18.736 "trtype": "tcp", 00:16:18.736 "traddr": "10.0.0.2", 00:16:18.736 "adrfam": "ipv4", 00:16:18.736 "trsvcid": "4420", 00:16:18.736 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:18.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:18.736 "prchk_reftag": false, 00:16:18.736 "prchk_guard": false, 00:16:18.736 "hdgst": false, 00:16:18.736 "ddgst": false, 00:16:18.736 "dhchap_key": "key3", 00:16:18.736 
"method": "bdev_nvme_attach_controller", 00:16:18.736 "req_id": 1 00:16:18.736 } 00:16:18.736 Got JSON-RPC error response 00:16:18.736 response: 00:16:18.736 { 00:16:18.736 "code": -5, 00:16:18.736 "message": "Input/output error" 00:16:18.736 } 00:16:18.737 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:18.737 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:18.737 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:18.737 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:18.737 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:16:18.737 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:16:18.737 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:16:18.737 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:18.737 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:18.737 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:18.995 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:18.995 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.995 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.995 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.995 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:18.995 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.995 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.995 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.995 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:18.995 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:18.995 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:18.995 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:18.995 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:18.995 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:18.995 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:18.995 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:18.995 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:19.268 request: 00:16:19.268 { 00:16:19.268 "name": "nvme0", 00:16:19.268 "trtype": "tcp", 00:16:19.268 "traddr": "10.0.0.2", 00:16:19.268 "adrfam": "ipv4", 00:16:19.268 "trsvcid": "4420", 00:16:19.268 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:19.268 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:19.268 "prchk_reftag": false, 00:16:19.268 "prchk_guard": false, 00:16:19.268 "hdgst": false, 00:16:19.268 "ddgst": false, 00:16:19.268 "dhchap_key": "key0", 00:16:19.268 "dhchap_ctrlr_key": "key1", 00:16:19.268 "method": "bdev_nvme_attach_controller", 00:16:19.268 "req_id": 1 00:16:19.268 } 00:16:19.268 Got JSON-RPC error response 00:16:19.268 response: 00:16:19.268 { 00:16:19.268 "code": -5, 00:16:19.268 "message": "Input/output error" 00:16:19.268 } 00:16:19.268 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:19.268 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:19.268 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:19.268 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:19.268 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:19.268 18:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:19.526 00:16:19.526 18:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:16:19.526 18:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 
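The exchange above is the core DH-HMAC-CHAP assertion in this test: an attach whose controller key does not match what the target configured for the host NQN fails with JSON-RPC error -5 (Input/output error), while the attach with only the matching key0 succeeds, which the bdev_nvme_get_controllers call that follows confirms. A condensed sketch of that pass/fail check, reusing the exact RPC flags from the log (HOSTRPC is a hypothetical shorthand for the host-socket rpc.py invocation):

HOSTRPC='./scripts/rpc.py -s /var/tmp/host.sock'  # host-side SPDK app socket, as in the log
# mismatched controller key -> expect failure (-5 Input/output error), so success here is a test error
$HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 && exit 1
# matching key only -> attach succeeds; controller is then listed by name
$HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
$HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name'  # expect: nvme0
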
00:16:19.526 18:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.784 18:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.784 18:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.784 18:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.042 18:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:16:20.042 18:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:16:20.042 18:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3136688 00:16:20.042 18:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3136688 ']' 00:16:20.042 18:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3136688 00:16:20.042 18:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:16:20.042 18:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:20.042 18:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3136688 00:16:20.042 18:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:20.042 18:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:20.042 18:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3136688' 00:16:20.042 killing process with pid 3136688 00:16:20.042 18:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3136688 00:16:20.042 18:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3136688 00:16:20.607 18:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:16:20.607 18:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:20.607 18:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:16:20.607 18:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:20.607 18:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:16:20.607 18:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:20.607 18:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:20.607 rmmod nvme_tcp 00:16:20.607 rmmod nvme_fabrics 00:16:20.607 rmmod nvme_keyring 00:16:20.607 18:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:20.607 18:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:16:20.607 18:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:16:20.607 18:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- 
# '[' -n 3159232 ']' 00:16:20.607 18:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3159232 00:16:20.607 18:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3159232 ']' 00:16:20.607 18:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3159232 00:16:20.607 18:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:16:20.607 18:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:20.607 18:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3159232 00:16:20.607 18:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:20.607 18:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:20.607 18:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3159232' 00:16:20.607 killing process with pid 3159232 00:16:20.607 18:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3159232 00:16:20.607 18:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3159232 00:16:21.174 18:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:21.174 18:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:21.174 18:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:21.174 18:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:21.174 18:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:21.174 18:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.174 18:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:21.174 18:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.115 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:23.115 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.TJh /tmp/spdk.key-sha256.g0i /tmp/spdk.key-sha384.4Sg /tmp/spdk.key-sha512.GhU /tmp/spdk.key-sha512.sIA /tmp/spdk.key-sha384.bbU /tmp/spdk.key-sha256.fco '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:16:23.115 00:16:23.115 real 3m10.170s 00:16:23.115 user 7m21.839s 00:16:23.115 sys 0m25.172s 00:16:23.115 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:23.115 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.115 ************************************ 00:16:23.115 END TEST nvmf_auth_target 00:16:23.115 ************************************ 00:16:23.115 18:59:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:16:23.115 18:59:00 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:23.115 18:59:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:23.115 18:59:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:23.115 18:59:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:23.115 ************************************ 00:16:23.115 START TEST nvmf_bdevio_no_huge 00:16:23.115 ************************************ 00:16:23.115 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:23.115 * Looking for test storage... 00:16:23.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:23.115 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:23.115 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:16:23.115 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.115 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.115 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.115 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.115 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:23.116 18:59:00 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:16:23.116 18:59:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:25.024 18:59:02 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:25.024 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:25.024 18:59:02 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:25.024 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:25.024 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:25.025 Found net devices under 0000:09:00.0: cvl_0_0 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
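The device scan traced here resolves each supported PCI function (Intel 0x159b, the E810 ports on this host) to its kernel net device through sysfs. A minimal standalone sketch of that lookup, assuming the same sysfs layout and reusing the first address from the trace:

    # Resolve a PCI function to its network interface(s) via sysfs.
    # The address is copied from the trace above; adjust for another host.
    shopt -s nullglob                                  # empty array when nothing matches
    pci=0000:09:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one path per netdev
    if (( ${#pci_net_devs[@]} > 0 )); then
        pci_net_devs=("${pci_net_devs[@]##*/}")        # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    fi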
00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:25.025 Found net devices under 0000:09:00.1: cvl_0_1 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:25.025 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:25.282 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:25.283 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:25.283 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:25.283 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:16:25.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:16:25.283 00:16:25.283 --- 10.0.0.2 ping statistics --- 00:16:25.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.283 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:16:25.283 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:25.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:25.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:16:25.283 00:16:25.283 --- 10.0.0.1 ping statistics --- 00:16:25.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.283 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:16:25.283 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:25.283 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:16:25.283 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:25.283 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:25.283 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:25.283 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:25.283 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:25.283 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:25.283 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:25.283 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:25.283 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:25.283 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:25.283 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:25.283 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3162016 00:16:25.283 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:25.283 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3162016 00:16:25.283 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 3162016 ']' 00:16:25.283 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.283 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:25.283 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
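nvmf_tcp_init, traced just above, splits the two E810 ports across a network namespace so target and initiator traffic cross a real link: the target port moves into cvl_0_0_ns_spdk with 10.0.0.2, the initiator port keeps 10.0.0.1 in the root namespace, and both directions are ping-verified before the target app starts. A condensed sketch of that sequence, interface names and addresses as logged (root required):

    # Namespace plumbing as performed by nvmf_tcp_init in the trace.
    TARGET_NS=cvl_0_0_ns_spdk
    ip netns add "$TARGET_NS"
    ip link set cvl_0_0 netns "$TARGET_NS"              # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root ns
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
    ip netns exec "$TARGET_NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> namespace
    ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1       # namespace -> root ns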
00:16:25.283 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:25.283 18:59:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:25.283 [2024-07-24 18:59:02.739380] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:16:25.283 [2024-07-24 18:59:02.739494] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:25.283 [2024-07-24 18:59:02.812537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:25.540 [2024-07-24 18:59:02.935079] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:25.540 [2024-07-24 18:59:02.935159] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:25.540 [2024-07-24 18:59:02.935176] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:25.540 [2024-07-24 18:59:02.935190] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:25.540 [2024-07-24 18:59:02.935201] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:25.541 [2024-07-24 18:59:02.935304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:25.541 [2024-07-24 18:59:02.935357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:16:25.541 [2024-07-24 18:59:02.935416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:25.541 [2024-07-24 18:59:02.935419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:26.104 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:26.104 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:16:26.104 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:26.104 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:26.104 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:26.104 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:26.104 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:26.104 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.104 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:26.104 [2024-07-24 18:59:03.698864] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:26.361 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.361 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:26.361 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.361 18:59:03 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:26.361 Malloc0 00:16:26.361 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.361 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:26.361 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.361 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:26.361 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.361 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:26.361 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.361 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:26.361 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.361 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:26.361 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.361 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:26.361 [2024-07-24 18:59:03.737296] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:26.361 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.361 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:26.361 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:26.361 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:16:26.361 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:16:26.361 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:26.361 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:26.361 { 00:16:26.361 "params": { 00:16:26.361 "name": "Nvme$subsystem", 00:16:26.361 "trtype": "$TEST_TRANSPORT", 00:16:26.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:26.361 "adrfam": "ipv4", 00:16:26.361 "trsvcid": "$NVMF_PORT", 00:16:26.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:26.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:26.361 "hdgst": ${hdgst:-false}, 00:16:26.361 "ddgst": ${ddgst:-false} 00:16:26.361 }, 00:16:26.361 "method": "bdev_nvme_attach_controller" 00:16:26.361 } 00:16:26.361 EOF 00:16:26.361 )") 00:16:26.361 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:16:26.361 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
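The --json /dev/fd/62 argument traced at target/bdevio.sh@24 is bash process substitution: bdevio reads its bdev configuration straight from the output of gen_nvmf_target_json, no temporary file involved. A sketch of the equivalent script line, with the heredoc template above expanding to the attach entry printed just below:

    # bdevio consumes the generated config through a pipe-backed fd;
    # the script line behind the trace is effectively:
    bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024
    # gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per
    # subsystem from the "config+=..." heredoc shown above.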
00:16:26.361 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:16:26.361 18:59:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:26.361 "params": { 00:16:26.361 "name": "Nvme1", 00:16:26.361 "trtype": "tcp", 00:16:26.361 "traddr": "10.0.0.2", 00:16:26.361 "adrfam": "ipv4", 00:16:26.361 "trsvcid": "4420", 00:16:26.361 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:26.361 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:26.361 "hdgst": false, 00:16:26.361 "ddgst": false 00:16:26.361 }, 00:16:26.361 "method": "bdev_nvme_attach_controller" 00:16:26.361 }' 00:16:26.361 [2024-07-24 18:59:03.782635] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:16:26.361 [2024-07-24 18:59:03.782725] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3162170 ] 00:16:26.361 [2024-07-24 18:59:03.849863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:26.618 [2024-07-24 18:59:03.967122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.618 [2024-07-24 18:59:03.967150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:26.618 [2024-07-24 18:59:03.967153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.876 I/O targets: 00:16:26.876 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:26.876 00:16:26.876 00:16:26.876 CUnit - A unit testing framework for C - Version 2.1-3 00:16:26.876 http://cunit.sourceforge.net/ 00:16:26.876 00:16:26.876 00:16:26.876 Suite: bdevio tests on: Nvme1n1 00:16:26.876 Test: blockdev write read block ...passed 00:16:26.876 Test: blockdev write zeroes read block ...passed 00:16:26.876 Test: blockdev write zeroes read no split ...passed 00:16:26.876 Test: blockdev write zeroes read split ...passed 00:16:26.876 Test: blockdev write zeroes read split partial ...passed 00:16:26.876 Test: blockdev reset ...[2024-07-24 18:59:04.464060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:26.876 [2024-07-24 18:59:04.464176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1213fb0 (9): Bad file descriptor 00:16:27.132 [2024-07-24 18:59:04.561905] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:27.132 passed 00:16:27.132 Test: blockdev write read 8 blocks ...passed 00:16:27.132 Test: blockdev write read size > 128k ...passed 00:16:27.132 Test: blockdev write read invalid size ...passed 00:16:27.132 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:27.132 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:27.133 Test: blockdev write read max offset ...passed 00:16:27.133 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:27.133 Test: blockdev writev readv 8 blocks ...passed 00:16:27.390 Test: blockdev writev readv 30 x 1block ...passed 00:16:27.390 Test: blockdev writev readv block ...passed 00:16:27.390 Test: blockdev writev readv size > 128k ...passed 00:16:27.390 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:27.390 Test: blockdev comparev and writev ...[2024-07-24 18:59:04.819915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:27.390 [2024-07-24 18:59:04.819951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.390 [2024-07-24 18:59:04.819975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:27.390 [2024-07-24 18:59:04.819993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:27.390 [2024-07-24 18:59:04.820386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:27.390 [2024-07-24 18:59:04.820410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:27.390 [2024-07-24 18:59:04.820431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:27.390 [2024-07-24 18:59:04.820447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:27.390 [2024-07-24 18:59:04.820801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:27.390 [2024-07-24 18:59:04.820824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:27.390 [2024-07-24 18:59:04.820844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:27.390 [2024-07-24 18:59:04.820860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:27.390 [2024-07-24 18:59:04.821256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:27.390 [2024-07-24 18:59:04.821279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:27.391 [2024-07-24 18:59:04.821300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:27.391 [2024-07-24 18:59:04.821323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:27.391 passed 00:16:27.391 Test: blockdev nvme passthru rw ...passed 00:16:27.391 Test: blockdev nvme passthru vendor specific ...[2024-07-24 18:59:04.903455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:27.391 [2024-07-24 18:59:04.903482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:27.391 [2024-07-24 18:59:04.903692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:27.391 [2024-07-24 18:59:04.903716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:27.391 [2024-07-24 18:59:04.903920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:27.391 [2024-07-24 18:59:04.903943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:27.391 [2024-07-24 18:59:04.904150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:27.391 [2024-07-24 18:59:04.904173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:27.391 passed 00:16:27.391 Test: blockdev nvme admin passthru ...passed 00:16:27.391 Test: blockdev copy ...passed 00:16:27.391 00:16:27.391 Run Summary: Type Total Ran Passed Failed Inactive 00:16:27.391 suites 1 1 n/a 0 0 00:16:27.391 tests 23 23 23 0 0 00:16:27.391 asserts 152 152 152 0 n/a 00:16:27.391 00:16:27.391 Elapsed time = 1.244 seconds 00:16:27.956 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:27.956 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.956 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:27.956 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.956 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:27.956 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:16:27.956 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:27.956 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:16:27.956 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:27.956 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:16:27.956 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:27.956 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:27.956 rmmod nvme_tcp 00:16:27.956 rmmod nvme_fabrics 00:16:27.956 rmmod nvme_keyring 00:16:27.956 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:27.956 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:16:27.956 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:16:27.956 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3162016 ']' 00:16:27.956 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3162016 00:16:27.956 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 3162016 ']' 00:16:27.956 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 3162016 00:16:27.956 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:16:27.956 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:27.956 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3162016 00:16:27.956 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:16:27.956 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:16:27.956 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3162016' 00:16:27.956 killing process with pid 3162016 00:16:27.956 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 3162016 00:16:27.956 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 3162016 00:16:28.522 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:28.522 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:28.522 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:28.522 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:28.522 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:28.522 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.522 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:28.522 18:59:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.426 18:59:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:30.426 00:16:30.426 real 0m7.308s 00:16:30.426 user 0m14.771s 00:16:30.426 sys 0m2.479s 00:16:30.426 18:59:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:30.426 18:59:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:30.426 ************************************ 00:16:30.426 END TEST nvmf_bdevio_no_huge 00:16:30.426 ************************************ 00:16:30.426 18:59:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:30.426 18:59:07 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:30.426 18:59:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:30.426 18:59:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:30.426 ************************************ 00:16:30.426 START TEST nvmf_tls 00:16:30.426 ************************************ 00:16:30.426 18:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:30.426 * Looking for test storage... 00:16:30.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:30.426 18:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:30.426 18:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:16:30.426 18:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:30.426 18:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:30.426 18:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:30.426 18:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:30.426 18:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:30.426 18:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:30.426 18:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:30.426 18:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:30.426 18:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:30.426 18:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:30.426 18:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:30.426 18:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:30.426 18:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:30.426 18:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:30.426 18:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:30.426 18:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:30.426 18:59:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:30.426 18:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.426 18:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.426 18:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.426 18:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.426 18:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.426 18:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.426 18:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:16:30.426 18:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.426 18:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:16:30.426 18:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:30.426 18:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:30.426 18:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:30.426 18:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:30.426 18:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:30.426 18:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:30.426 18:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
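build_nvmf_app_args, whose checks appear at nvmf/common.sh@25-35 just above, assembles the target command line that later runs inside the namespace. A sketch of that assembly using only what the trace shows (the binary path is the one launched earlier in the log):

    # Target command assembly as traced (nvmf/common.sh@29 and @31).
    NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shared-memory id + full tracepoint mask
    NVMF_APP+=("${NO_HUGE[@]}")                   # expands to '--no-huge -s 1024' in these suites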
00:16:30.426 18:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:30.426 18:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:30.426 18:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:16:30.426 18:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:30.426 18:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:30.426 18:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:30.426 18:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:30.426 18:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:30.426 18:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.426 18:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:30.427 18:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.427 18:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:30.427 18:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:30.427 18:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:16:30.427 18:59:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:32.328 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:32.328 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:32.328 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:32.329 Found net devices under 0000:09:00.0: cvl_0_0 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:32.329 Found net devices under 0000:09:00.1: cvl_0_1 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:32.329 18:59:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:32.329 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:32.587 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:32.587 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:32.587 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:32.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:32.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:16:32.587 00:16:32.587 --- 10.0.0.2 ping statistics --- 00:16:32.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:32.587 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:16:32.587 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:32.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:32.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:16:32.587 00:16:32.587 --- 10.0.0.1 ping statistics --- 00:16:32.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:32.588 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:16:32.588 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:32.588 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:16:32.588 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:32.588 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:32.588 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:32.588 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:32.588 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:32.588 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:32.588 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:32.588 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:32.588 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:32.588 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:32.588 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:32.588 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3164248 00:16:32.588 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:32.588 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3164248 00:16:32.588 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3164248 ']' 00:16:32.588 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.588 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:32.588 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.588 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:32.588 18:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:32.588 [2024-07-24 18:59:10.029854] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
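Annotation: the nvmf_tcp_init sequence traced above moves one ice port (cvl_0_0) into a private network namespace for the target while its sibling (cvl_0_1) stays in the root namespace as the initiator, so a single host drives real NVMe/TCP traffic end to end over the E810 pair. A minimal sketch of that topology, assuming the two ports already carry the cvl_0_0/cvl_0_1 names used in this run:

# Sketch of the namespace topology built by nvmf_tcp_init (nvmf/common.sh);
# names and addresses are taken from the trace above, not invented here.
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0                  # start from clean interfaces
ip -4 addr flush cvl_0_1
ip netns add "$NS"                        # private namespace for the target side
ip link set cvl_0_0 netns "$NS"           # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator address, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
ping -c 1 10.0.0.2                        # root namespace -> namespaced target
ip netns exec "$NS" ping -c 1 10.0.0.1    # namespace -> root namespace

Every target-side process is then launched through NVMF_TARGET_NS_CMD, which is why the nvmf_tgt starting here runs under ip netns exec cvl_0_0_ns_spdk.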
00:16:32.588 [2024-07-24 18:59:10.029971] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:32.588 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.588 [2024-07-24 18:59:10.101436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.846 [2024-07-24 18:59:10.215392] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:32.846 [2024-07-24 18:59:10.215446] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:32.846 [2024-07-24 18:59:10.215460] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:32.846 [2024-07-24 18:59:10.215471] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:32.846 [2024-07-24 18:59:10.215481] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:32.846 [2024-07-24 18:59:10.215509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.846 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:32.846 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:32.846 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:32.846 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:32.846 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:32.846 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:32.846 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:16:32.846 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:33.104 true 00:16:33.104 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:33.104 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:16:33.362 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:16:33.362 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:16:33.362 18:59:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:33.619 18:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:33.619 18:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:16:33.877 18:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:16:33.877 18:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:16:33.877 18:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
7 00:16:34.136 18:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:34.136 18:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:16:34.393 18:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:16:34.393 18:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:16:34.393 18:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:34.393 18:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:16:34.651 18:59:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:16:34.651 18:59:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:16:34.651 18:59:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:34.910 18:59:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:34.910 18:59:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:16:35.167 18:59:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:16:35.167 18:59:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:16:35.167 18:59:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:35.425 18:59:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:35.425 18:59:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:16:35.683 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:16:35.683 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:16:35.683 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:16:35.683 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:16:35.683 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:35.683 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:35.683 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:16:35.683 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:16:35.683 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:35.683 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:35.683 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:16:35.683 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
1 00:16:35.683 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:35.683 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:35.683 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:16:35.683 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:16:35.683 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:35.683 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:35.683 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:16:35.683 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.MOA5lfkFvl 00:16:35.683 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:16:35.683 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.Cae0jIXGYL 00:16:35.683 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:35.683 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:35.683 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.MOA5lfkFvl 00:16:35.683 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Cae0jIXGYL 00:16:35.683 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:35.940 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:16:36.198 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.MOA5lfkFvl 00:16:36.198 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.MOA5lfkFvl 00:16:36.198 18:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:36.456 [2024-07-24 18:59:14.026350] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:36.456 18:59:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:36.714 18:59:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:36.972 [2024-07-24 18:59:14.519743] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:36.972 [2024-07-24 18:59:14.519973] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:36.972 18:59:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:37.229 malloc0 00:16:37.486 18:59:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:16:37.743 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MOA5lfkFvl
00:16:38.001 [2024-07-24 18:59:15.366313] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09
00:16:38.001 18:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.MOA5lfkFvl
00:16:38.001 EAL: No free 2048 kB hugepages reported on node 1
00:16:48.046 Initializing NVMe Controllers
00:16:48.046 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:16:48.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:16:48.046 Initialization complete. Launching workers.
00:16:48.046 ========================================================
00:16:48.046 Latency(us)
00:16:48.046 Device Information : IOPS MiB/s Average min max
00:16:48.046 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7441.07 29.07 8603.70 1136.32 10612.69
00:16:48.046 ========================================================
00:16:48.046 Total : 7441.07 29.07 8603.70 1136.32 10612.69
00:16:48.046
00:16:48.046 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MOA5lfkFvl
00:16:48.046 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:16:48.046 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:16:48.046 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:16:48.046 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.MOA5lfkFvl'
00:16:48.046 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:16:48.046 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3166134
00:16:48.046 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:16:48.046 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:16:48.046 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3166134 /var/tmp/bdevperf.sock
00:16:48.046 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3166134 ']'
00:16:48.046 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:16:48.046 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:16:48.046 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:48.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:48.046 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:48.046 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:48.046 [2024-07-24 18:59:25.533501] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:16:48.046 [2024-07-24 18:59:25.533572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3166134 ] 00:16:48.046 EAL: No free 2048 kB hugepages reported on node 1 00:16:48.046 [2024-07-24 18:59:25.589971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.303 [2024-07-24 18:59:25.696062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:48.303 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:48.303 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:48.303 18:59:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MOA5lfkFvl 00:16:48.560 [2024-07-24 18:59:26.080554] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:48.561 [2024-07-24 18:59:26.080650] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:48.561 TLSTESTn1 00:16:48.818 18:59:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:48.818 Running I/O for 10 seconds... 
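Annotation: the two keys used in this run (/tmp/tmp.MOA5lfkFvl and /tmp/tmp.Cae0jIXGYL) hold TLS PSKs in the interchange form NVMeTLSkey-1:01:<base64>:. The harness builds them with format_interchange_psk, shelling out to python exactly as the nvmf/common.sh trace above shows. A hedged reconstruction follows; the payload layout (key bytes plus a little-endian CRC-32, base64-encoded) is an assumption that is consistent with the 48-character payloads logged earlier, not something this log states. The 10-second bdevperf summary for the good key follows right after this sketch.

# Hedged reconstruction of format_interchange_psk from nvmf/common.sh; treat the
# exact implementation as an approximation of the real helper.
format_interchange_psk() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'PY'
import base64, struct, sys, zlib

key = sys.argv[1].encode()                # key bytes exactly as passed on the CLI
digest = int(sys.argv[2])                 # 1 -> the "01" hash field in the output
crc = struct.pack('<I', zlib.crc32(key) & 0xffffffff)  # little-endian CRC-32 tail (assumption)
print('NVMeTLSkey-1:{:02x}:{}:'.format(digest, base64.b64encode(key + crc).decode()))
PY
}

key_path=$(mktemp)                        # tls.sh got /tmp/tmp.MOA5lfkFvl here
format_interchange_psk 00112233445566778899aabbccddeeff 1 > "$key_path"
chmod 0600 "$key_path"                    # tighten permissions, as tls.sh does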
00:16:58.781
00:16:58.781 Latency(us)
00:16:58.781 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:58.781 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:16:58.781 Verification LBA range: start 0x0 length 0x2000
00:16:58.781 TLSTESTn1 : 10.04 2650.01 10.35 0.00 0.00 48186.79 7815.77 81167.55
00:16:58.781 ===================================================================================================================
00:16:58.781 Total : 2650.01 10.35 0.00 0.00 48186.79 7815.77 81167.55
00:16:58.781 0
00:16:58.781 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:16:58.781 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 3166134
00:16:58.781 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3166134 ']'
00:16:58.781 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3166134
00:16:58.781 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:16:58.781 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:16:58.781 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3166134
00:16:59.039 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:16:59.039 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:16:59.039 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3166134'
00:16:59.039 killing process with pid 3166134
00:16:59.039 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3166134
00:16:59.039 Received shutdown signal, test time was about 10.000000 seconds
00:16:59.039
00:16:59.039 Latency(us)
00:16:59.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:59.039 ===================================================================================================================
00:16:59.039 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:16:59.039 [2024-07-24 18:59:36.397567] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:16:59.039 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3166134
00:16:59.297 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Cae0jIXGYL
00:16:59.297 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0
00:16:59.297 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Cae0jIXGYL
00:16:59.297 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf
00:16:59.297 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:59.297 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf
00:16:59.297 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:59.297 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Cae0jIXGYL 00:16:59.297 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:59.297 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:59.297 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:59.297 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Cae0jIXGYL' 00:16:59.297 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:59.297 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3167447 00:16:59.297 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:59.297 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:59.297 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3167447 /var/tmp/bdevperf.sock 00:16:59.297 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3167447 ']' 00:16:59.297 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:59.297 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:59.297 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:59.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:59.297 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:59.297 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:59.297 [2024-07-24 18:59:36.706597] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
00:16:59.297 [2024-07-24 18:59:36.706689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3167447 ] 00:16:59.298 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.298 [2024-07-24 18:59:36.765018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.298 [2024-07-24 18:59:36.872566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:59.555 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:59.555 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:59.555 18:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Cae0jIXGYL 00:16:59.813 [2024-07-24 18:59:37.252379] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:59.813 [2024-07-24 18:59:37.252521] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:59.814 [2024-07-24 18:59:37.257902] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:59.814 [2024-07-24 18:59:37.258398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172af90 (107): Transport endpoint is not connected 00:16:59.814 [2024-07-24 18:59:37.259376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172af90 (9): Bad file descriptor 00:16:59.814 [2024-07-24 18:59:37.260374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:59.814 [2024-07-24 18:59:37.260394] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:59.814 [2024-07-24 18:59:37.260411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
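Annotation: tls.sh@146 is the first negative case. run_bdevperf is handed the second key, which the target never registered, and the whole call is wrapped in the harness's NOT helper so that an expected failure counts as a pass; the helper's xtrace (local es=0, (( es > 128 )), (( !es == 0 ))) is scattered through the surrounding lines. A minimal sketch of the inversion logic, assuming only the simplified semantics visible in that trace:

# Hedged sketch of NOT from common/autotest_common.sh; the real helper does more
# argument validation (valid_exec_arg) than is reproduced here.
NOT() {
    local es=0
    "$@" || es=$?
    # The arithmetic test doubles as the return value: it succeeds (exit 0)
    # exactly when es is non-zero, i.e. when the wrapped command failed.
    (( !es == 0 ))
}

# Usage as on tls.sh@146: passes because the attach with the wrong key fails.
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Cae0jIXGYL

On the wire, the handshake dies because the server finds no PSK for the offered identity, the connection is torn down, and spdk_sock_recv() reports errno 107 (Transport endpoint is not connected); the JSON-RPC dump that follows records the resulting -5 Input/output error.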
00:16:59.814 request:
00:16:59.814 {
00:16:59.814 "name": "TLSTEST",
00:16:59.814 "trtype": "tcp",
00:16:59.814 "traddr": "10.0.0.2",
00:16:59.814 "adrfam": "ipv4",
00:16:59.814 "trsvcid": "4420",
00:16:59.814 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:16:59.814 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:16:59.814 "prchk_reftag": false,
00:16:59.814 "prchk_guard": false,
00:16:59.814 "hdgst": false,
00:16:59.814 "ddgst": false,
00:16:59.814 "psk": "/tmp/tmp.Cae0jIXGYL",
00:16:59.814 "method": "bdev_nvme_attach_controller",
00:16:59.814 "req_id": 1
00:16:59.814 }
00:16:59.814 Got JSON-RPC error response
00:16:59.814 response:
00:16:59.814 {
00:16:59.814 "code": -5,
00:16:59.814 "message": "Input/output error"
00:16:59.814 }
00:16:59.814 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3167447
00:16:59.814 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3167447 ']'
00:16:59.814 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3167447
00:16:59.814 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:16:59.814 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:16:59.814 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3167447
00:16:59.814 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:16:59.814 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:16:59.814 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3167447'
00:16:59.814 killing process with pid 3167447
00:16:59.814 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3167447
00:16:59.814 Received shutdown signal, test time was about 10.000000 seconds
00:16:59.814
00:16:59.814 Latency(us)
00:16:59.814 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:59.814 ===================================================================================================================
00:16:59.814 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:16:59.814 [2024-07-24 18:59:37.307589] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:16:59.814 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3167447
00:17:00.072 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1
00:17:00.072 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1
00:17:00.072 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:17:00.072 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:17:00.072 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:17:00.072 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.MOA5lfkFvl
00:17:00.072 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0
00:17:00.072 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.MOA5lfkFvl 00:17:00.072 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:00.072 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:00.072 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:00.072 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:00.072 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.MOA5lfkFvl 00:17:00.072 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:00.072 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:00.072 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:00.072 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.MOA5lfkFvl' 00:17:00.072 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:00.072 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3167515 00:17:00.072 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:00.072 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:00.072 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3167515 /var/tmp/bdevperf.sock 00:17:00.072 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3167515 ']' 00:17:00.072 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:00.072 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:00.072 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:00.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:00.072 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:00.072 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:00.072 [2024-07-24 18:59:37.608802] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
00:17:00.072 [2024-07-24 18:59:37.608901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3167515 ] 00:17:00.072 EAL: No free 2048 kB hugepages reported on node 1 00:17:00.072 [2024-07-24 18:59:37.667568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.329 [2024-07-24 18:59:37.772819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:00.329 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:00.329 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:00.329 18:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.MOA5lfkFvl 00:17:00.587 [2024-07-24 18:59:38.110566] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:00.587 [2024-07-24 18:59:38.110686] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:00.587 [2024-07-24 18:59:38.117162] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:00.587 [2024-07-24 18:59:38.117194] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:00.587 [2024-07-24 18:59:38.117253] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:00.587 [2024-07-24 18:59:38.117520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e5f90 (107): Transport endpoint is not connected 00:17:00.587 [2024-07-24 18:59:38.118509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e5f90 (9): Bad file descriptor 00:17:00.587 [2024-07-24 18:59:38.119508] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:00.587 [2024-07-24 18:59:38.119530] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:00.587 [2024-07-24 18:59:38.119553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
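Annotation: this second negative case fails one step earlier and on the target side. The client offers the PSK identity 'NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1', and tcp_sock_get_key finds nothing because only the (cnode1, host1) pair was ever given a key. For contrast, here is the target-side RPC sequence from earlier in this run (tls.sh@51-58) that defines which identities resolve; commands and arguments are copied from the trace above, with the rpc.py path shortened to a variable for readability. The dump below records the same -5 error as before.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# The only (subsystem, host) pair that gets a PSK; host2 and cnode2 stay unknown:
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MOA5lfkFvl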
00:17:00.587 request:
00:17:00.587 {
00:17:00.587 "name": "TLSTEST",
00:17:00.587 "trtype": "tcp",
00:17:00.587 "traddr": "10.0.0.2",
00:17:00.587 "adrfam": "ipv4",
00:17:00.587 "trsvcid": "4420",
00:17:00.587 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:17:00.587 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:17:00.587 "prchk_reftag": false,
00:17:00.587 "prchk_guard": false,
00:17:00.587 "hdgst": false,
00:17:00.587 "ddgst": false,
00:17:00.587 "psk": "/tmp/tmp.MOA5lfkFvl",
00:17:00.587 "method": "bdev_nvme_attach_controller",
00:17:00.587 "req_id": 1
00:17:00.587 }
00:17:00.587 Got JSON-RPC error response
00:17:00.587 response:
00:17:00.587 {
00:17:00.587 "code": -5,
00:17:00.587 "message": "Input/output error"
00:17:00.587 }
00:17:00.587 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3167515
00:17:00.587 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3167515 ']'
00:17:00.587 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3167515
00:17:00.587 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:17:00.587 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:17:00.587 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3167515
00:17:00.587 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:17:00.587 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:17:00.587 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3167515'
00:17:00.587 killing process with pid 3167515
00:17:00.587 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3167515
00:17:00.587 Received shutdown signal, test time was about 10.000000 seconds
00:17:00.587
00:17:00.587 Latency(us)
00:17:00.587 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:00.587 ===================================================================================================================
00:17:00.587 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:17:00.587 [2024-07-24 18:59:38.161427] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:17:00.587 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3167515
00:17:00.845 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1
00:17:00.845 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1
00:17:00.845 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:17:00.845 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:17:00.845 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:17:00.845 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.MOA5lfkFvl
00:17:00.845 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0
00:17:00.845 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.MOA5lfkFvl 00:17:00.845 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:00.845 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:00.845 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:00.845 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:00.845 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.MOA5lfkFvl 00:17:00.845 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:00.845 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:00.845 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:00.845 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.MOA5lfkFvl' 00:17:00.845 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:00.845 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3167609 00:17:00.845 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:00.845 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:00.845 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3167609 /var/tmp/bdevperf.sock 00:17:00.845 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3167609 ']' 00:17:00.845 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:00.845 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:00.845 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:00.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:00.845 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:00.845 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:00.845 [2024-07-24 18:59:38.443145] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
00:17:00.845 [2024-07-24 18:59:38.443220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3167609 ] 00:17:01.102 EAL: No free 2048 kB hugepages reported on node 1 00:17:01.102 [2024-07-24 18:59:38.500502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.103 [2024-07-24 18:59:38.605498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:01.360 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:01.360 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:01.360 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MOA5lfkFvl 00:17:01.360 [2024-07-24 18:59:38.937727] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:01.360 [2024-07-24 18:59:38.937840] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:01.360 [2024-07-24 18:59:38.949326] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:01.360 [2024-07-24 18:59:38.949360] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:01.360 [2024-07-24 18:59:38.949397] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:01.360 [2024-07-24 18:59:38.949630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218ff90 (107): Transport endpoint is not connected 00:17:01.360 [2024-07-24 18:59:38.950616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218ff90 (9): Bad file descriptor 00:17:01.360 [2024-07-24 18:59:38.951614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:01.360 [2024-07-24 18:59:38.951632] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:01.360 [2024-07-24 18:59:38.951649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
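Annotation: the mirror-image case above (unknown subsystem cnode2, known host) produces the same lookup failure, this time for identity 'NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2'. The identity string appears to follow the TP 8018 layout: 'NVMe', PSK format version 0, 'R' for a retained PSK, '01' for the hash, then hostnqn and subnqn separated by spaces; read that decomposition as an interpretation of the logged string rather than something this log states. A trivial helper makes the pairing explicit; the request/response dump follows.

# Hypothetical helper, not part of the harness: rebuilds the identity exactly as
# it appears in the tcp_sock_get_key errors above (hostnqn first, then subnqn).
psk_identity() {
    printf 'NVMe0R01 %s %s\n' "$1" "$2"
}
psk_identity nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2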
00:17:01.360 request:
00:17:01.360 {
00:17:01.360 "name": "TLSTEST",
00:17:01.360 "trtype": "tcp",
00:17:01.360 "traddr": "10.0.0.2",
00:17:01.360 "adrfam": "ipv4",
00:17:01.360 "trsvcid": "4420",
00:17:01.360 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:17:01.360 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:17:01.360 "prchk_reftag": false,
00:17:01.360 "prchk_guard": false,
00:17:01.360 "hdgst": false,
00:17:01.360 "ddgst": false,
00:17:01.360 "psk": "/tmp/tmp.MOA5lfkFvl",
00:17:01.360 "method": "bdev_nvme_attach_controller",
00:17:01.360 "req_id": 1
00:17:01.360 }
00:17:01.360 Got JSON-RPC error response
00:17:01.360 response:
00:17:01.360 {
00:17:01.360 "code": -5,
00:17:01.360 "message": "Input/output error"
00:17:01.360 }
00:17:01.618 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3167609
00:17:01.618 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3167609 ']'
00:17:01.618 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3167609
00:17:01.618 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:17:01.618 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:17:01.618 18:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3167609
00:17:01.618 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:17:01.618 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:17:01.618 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3167609'
00:17:01.618 killing process with pid 3167609
00:17:01.618 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3167609
00:17:01.618 Received shutdown signal, test time was about 10.000000 seconds
00:17:01.618
00:17:01.618 Latency(us)
00:17:01.618 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:01.618 ===================================================================================================================
00:17:01.618 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:17:01.618 [2024-07-24 18:59:39.002318] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:17:01.618 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3167609
00:17:01.877 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1
00:17:01.877 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1
00:17:01.877 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:17:01.877 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:17:01.877 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:17:01.877 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:17:01.877 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0
00:17:01.877 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:01.877 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:01.877 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:01.877 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:01.877 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:01.877 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:01.877 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:01.877 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:01.877 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:01.877 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:01.877 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:01.877 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3167741 00:17:01.877 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:01.877 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:01.877 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3167741 /var/tmp/bdevperf.sock 00:17:01.877 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3167741 ']' 00:17:01.877 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:01.877 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:01.877 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:01.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:01.877 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:01.877 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:01.877 [2024-07-24 18:59:39.312778] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
00:17:01.877 [2024-07-24 18:59:39.312874] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3167741 ] 00:17:01.877 EAL: No free 2048 kB hugepages reported on node 1 00:17:01.877 [2024-07-24 18:59:39.369275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.877 [2024-07-24 18:59:39.474686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.136 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:02.136 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:02.136 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:02.394 [2024-07-24 18:59:39.811675] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:02.394 [2024-07-24 18:59:39.813243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf32770 (9): Bad file descriptor 00:17:02.394 [2024-07-24 18:59:39.814239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:02.394 [2024-07-24 18:59:39.814260] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:02.394 [2024-07-24 18:59:39.814278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
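The errors above are the expected outcome of this step: the bdevperf attach was issued with an empty PSK against a listener set up for TLS earlier in the run, so the socket is torn down before controller init completes, and the JSON-RPC dump that follows records the call failing with -5. The call the wrapper issues, copied from the trace (workspace path shortened), is roughly:

# Client-side attach without a PSK (sketch of the rpc.py invocation traced
# above; the full jenkins workspace path is shortened). With no --psk there
# is no TLS handshake, so a TLS-only listener drops the connection and the
# RPC is expected to return -5 Input/output error.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1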
00:17:02.394 request: 00:17:02.394 { 00:17:02.394 "name": "TLSTEST", 00:17:02.394 "trtype": "tcp", 00:17:02.394 "traddr": "10.0.0.2", 00:17:02.394 "adrfam": "ipv4", 00:17:02.394 "trsvcid": "4420", 00:17:02.394 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:02.394 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:02.394 "prchk_reftag": false, 00:17:02.394 "prchk_guard": false, 00:17:02.394 "hdgst": false, 00:17:02.394 "ddgst": false, 00:17:02.394 "method": "bdev_nvme_attach_controller", 00:17:02.394 "req_id": 1 00:17:02.394 } 00:17:02.394 Got JSON-RPC error response 00:17:02.394 response: 00:17:02.394 { 00:17:02.394 "code": -5, 00:17:02.394 "message": "Input/output error" 00:17:02.394 } 00:17:02.394 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3167741 00:17:02.394 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3167741 ']' 00:17:02.394 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3167741 00:17:02.394 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:02.394 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:02.394 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3167741 00:17:02.394 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:02.394 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:02.394 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3167741' 00:17:02.394 killing process with pid 3167741 00:17:02.394 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3167741 00:17:02.394 Received shutdown signal, test time was about 10.000000 seconds 00:17:02.394 00:17:02.394 Latency(us) 00:17:02.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:02.394 =================================================================================================================== 00:17:02.394 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:02.394 18:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3167741 00:17:02.651 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:02.651 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:02.651 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:02.651 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:02.651 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:02.651 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 3164248 00:17:02.651 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3164248 ']' 00:17:02.651 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3164248 00:17:02.651 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:02.652 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:02.652 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3164248 00:17:02.652 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:02.652 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:02.652 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3164248' 00:17:02.652 killing process with pid 3164248 00:17:02.652 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3164248 00:17:02.652 [2024-07-24 18:59:40.136766] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:02.652 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3164248 00:17:02.910 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:02.910 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:02.910 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:02.910 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:02.910 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:02.910 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:17:02.910 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:02.910 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:02.910 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:17:02.910 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.864rhk5WtD 00:17:02.910 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:02.910 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.864rhk5WtD 00:17:02.910 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:17:02.910 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:02.910 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:02.910 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:02.910 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3167892 00:17:02.910 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:02.910 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3167892 00:17:02.910 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3167892 ']' 00:17:02.910 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.910 18:59:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:02.910 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.910 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:02.910 18:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:03.169 [2024-07-24 18:59:40.545904] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:17:03.169 [2024-07-24 18:59:40.546004] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.169 EAL: No free 2048 kB hugepages reported on node 1 00:17:03.169 [2024-07-24 18:59:40.612158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.169 [2024-07-24 18:59:40.728701] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:03.169 [2024-07-24 18:59:40.728762] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:03.169 [2024-07-24 18:59:40.728778] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:03.169 [2024-07-24 18:59:40.728791] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:03.169 [2024-07-24 18:59:40.728802] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
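For the next test group, tls.sh@159 above derived key_long with format_interchange_psk: the NVMe TLS PSK interchange form, i.e. the NVMeTLSkey-1 prefix, a two-digit hash identifier (02 here), and base64 of the configured key bytes with their CRC-32 appended. A minimal sketch of that derivation, mirroring the "python -" heredoc visible in the trace (a reconstruction of nvmf/common.sh's format_key, not the verbatim source; the little-endian CRC byte order is an assumption):

format_key() {
    local prefix=$1 key=$2 digest=$3
    python3 - "$prefix" "$key" "$digest" <<'PYEOF'
import base64, sys, zlib

prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
# interchange blob = key bytes + CRC-32 of the key bytes (assumed little-endian)
crc = zlib.crc32(key).to_bytes(4, "little")
print("{}:{:02d}:{}:".format(prefix, digest, base64.b64encode(key + crc).decode()))
PYEOF
}

# format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2
# should print the key_long value captured above (ending in wWXNJw==:) if the
# CRC ordering assumption holds.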
00:17:03.169 [2024-07-24 18:59:40.728832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.102 18:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:04.102 18:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:04.102 18:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:04.102 18:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:04.102 18:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:04.102 18:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:04.102 18:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.864rhk5WtD 00:17:04.102 18:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.864rhk5WtD 00:17:04.103 18:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:04.360 [2024-07-24 18:59:41.719258] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:04.360 18:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:04.619 18:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:04.619 [2024-07-24 18:59:42.208635] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:04.619 [2024-07-24 18:59:42.208871] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.877 18:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:04.877 malloc0 00:17:05.134 18:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:05.135 18:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.864rhk5WtD 00:17:05.392 [2024-07-24 18:59:42.962677] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:05.392 18:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.864rhk5WtD 00:17:05.392 18:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:05.392 18:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:05.392 18:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:05.392 18:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.864rhk5WtD' 00:17:05.392 18:59:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:05.392 18:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3168186 00:17:05.392 18:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:05.392 18:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:05.392 18:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3168186 /var/tmp/bdevperf.sock 00:17:05.392 18:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3168186 ']' 00:17:05.392 18:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:05.392 18:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:05.392 18:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:05.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:05.392 18:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:05.392 18:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:05.655 [2024-07-24 18:59:43.019855] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:17:05.656 [2024-07-24 18:59:43.019933] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3168186 ] 00:17:05.656 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.656 [2024-07-24 18:59:43.077674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.656 [2024-07-24 18:59:43.188121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:05.916 18:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:05.916 18:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:05.916 18:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.864rhk5WtD 00:17:06.173 [2024-07-24 18:59:43.527134] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:06.173 [2024-07-24 18:59:43.527263] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:06.173 TLSTESTn1 00:17:06.173 18:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:06.173 Running I/O for 10 seconds... 
00:17:18.369 00:17:18.369 Latency(us) 00:17:18.369 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.369 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:18.369 Verification LBA range: start 0x0 length 0x2000 00:17:18.369 TLSTESTn1 : 10.04 2903.99 11.34 0.00 0.00 43961.12 11505.21 72623.60 00:17:18.369 =================================================================================================================== 00:17:18.369 Total : 2903.99 11.34 0.00 0.00 43961.12 11505.21 72623.60 00:17:18.369 0 00:17:18.369 18:59:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:18.369 18:59:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 3168186 00:17:18.369 18:59:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3168186 ']' 00:17:18.369 18:59:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3168186 00:17:18.369 18:59:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:18.369 18:59:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:18.369 18:59:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3168186 00:17:18.369 18:59:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:18.369 18:59:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:18.369 18:59:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3168186' 00:17:18.369 killing process with pid 3168186 00:17:18.369 18:59:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3168186 00:17:18.369 Received shutdown signal, test time was about 10.000000 seconds 00:17:18.369 00:17:18.369 Latency(us) 00:17:18.369 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.369 =================================================================================================================== 00:17:18.369 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:18.369 [2024-07-24 18:59:53.850264] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:18.369 18:59:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3168186 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.864rhk5WtD 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.864rhk5WtD 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.864rhk5WtD 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:18.369 
18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.864rhk5WtD 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.864rhk5WtD' 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3169498 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3169498 /var/tmp/bdevperf.sock 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3169498 ']' 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:18.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:18.369 [2024-07-24 18:59:54.174308] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
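This bdevperf instance (pid 3169498) starts after tls.sh@170 loosened the key file to 0666; the attach below is expected to fail with "Incorrect permissions for PSK file", since a 0600 copy of the same key worked moments earlier. A quick precondition check matching that behavior (a sketch only; the exact mode mask SPDK enforces is inferred from the 0600-pass/0666-fail outcome in this run):

key=/tmp/tmp.864rhk5WtD
mode=$(stat -c '%a' "$key")
# reject keys that group/others can read or write, per the observed behavior
if (( 0$mode & 077 )); then
    echo "PSK file $key (mode $mode) is too permissive; attach should fail"
fi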
00:17:18.369 [2024-07-24 18:59:54.174390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3169498 ] 00:17:18.369 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.369 [2024-07-24 18:59:54.231727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.369 [2024-07-24 18:59:54.334040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.864rhk5WtD 00:17:18.369 [2024-07-24 18:59:54.682907] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:18.369 [2024-07-24 18:59:54.682988] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:18.369 [2024-07-24 18:59:54.683004] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.864rhk5WtD 00:17:18.369 request: 00:17:18.369 { 00:17:18.369 "name": "TLSTEST", 00:17:18.369 "trtype": "tcp", 00:17:18.369 "traddr": "10.0.0.2", 00:17:18.369 "adrfam": "ipv4", 00:17:18.369 "trsvcid": "4420", 00:17:18.369 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:18.369 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:18.369 "prchk_reftag": false, 00:17:18.369 "prchk_guard": false, 00:17:18.369 "hdgst": false, 00:17:18.369 "ddgst": false, 00:17:18.369 "psk": "/tmp/tmp.864rhk5WtD", 00:17:18.369 "method": "bdev_nvme_attach_controller", 00:17:18.369 "req_id": 1 00:17:18.369 } 00:17:18.369 Got JSON-RPC error response 00:17:18.369 response: 00:17:18.369 { 00:17:18.369 "code": -1, 00:17:18.369 "message": "Operation not permitted" 00:17:18.369 } 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3169498 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3169498 ']' 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3169498 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3169498 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3169498' 00:17:18.369 killing process with pid 3169498 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3169498 00:17:18.369 Received shutdown signal, test time was about 10.000000 seconds 00:17:18.369 
00:17:18.369 Latency(us) 00:17:18.369 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.369 =================================================================================================================== 00:17:18.369 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3169498 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 3167892 00:17:18.369 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3167892 ']' 00:17:18.370 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3167892 00:17:18.370 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:18.370 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:18.370 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3167892 00:17:18.370 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:18.370 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:18.370 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3167892' 00:17:18.370 killing process with pid 3167892 00:17:18.370 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3167892 00:17:18.370 [2024-07-24 18:59:54.987659] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:18.370 18:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3167892 00:17:18.370 18:59:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:17:18.370 18:59:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:18.370 18:59:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:18.370 18:59:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:18.370 18:59:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3169643 00:17:18.370 18:59:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:18.370 18:59:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3169643 00:17:18.370 18:59:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3169643 ']' 00:17:18.370 18:59:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.370 18:59:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:18.370 18:59:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.370 18:59:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:18.370 18:59:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:18.370 [2024-07-24 18:59:55.345557] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:17:18.370 [2024-07-24 18:59:55.345656] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.370 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.370 [2024-07-24 18:59:55.412602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.370 [2024-07-24 18:59:55.526311] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.370 [2024-07-24 18:59:55.526374] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.370 [2024-07-24 18:59:55.526391] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.370 [2024-07-24 18:59:55.526413] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.370 [2024-07-24 18:59:55.526425] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
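The run now repeats the permission check on the target side: tls.sh@177 wraps setup_nvmf_tgt in NOT, so the step passes only if nvmf_subsystem_add_host rejects the still-0666 key. NOT is the expected-failure wrapper whose es bookkeeping appears throughout this trace; a minimal sketch of the idiom (a reconstruction, not the verbatim autotest_common.sh helper):

NOT() {
    # run the wrapped command and invert its status: the test step
    # succeeds only when the command fails
    local es=0
    "$@" || es=$?
    (( es != 0 ))
}
# as used below: NOT setup_nvmf_tgt /tmp/tmp.864rhk5WtD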
00:17:18.370 [2024-07-24 18:59:55.526463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.936 18:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:18.936 18:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:18.936 18:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:18.936 18:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:18.936 18:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:18.936 18:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.936 18:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.864rhk5WtD 00:17:18.936 18:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:18.936 18:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.864rhk5WtD 00:17:18.936 18:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:17:18.936 18:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:18.936 18:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:17:18.936 18:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:18.936 18:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.864rhk5WtD 00:17:18.936 18:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.864rhk5WtD 00:17:18.936 18:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:19.222 [2024-07-24 18:59:56.581938] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:19.222 18:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:19.479 18:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:19.738 [2024-07-24 18:59:57.179589] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:19.738 [2024-07-24 18:59:57.179827] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.738 18:59:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:19.996 malloc0 00:17:19.996 18:59:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:20.253 18:59:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.864rhk5WtD 00:17:20.511 [2024-07-24 18:59:58.022191] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:20.511 [2024-07-24 18:59:58.022233] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:20.511 [2024-07-24 18:59:58.022272] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:17:20.511 request: 00:17:20.511 { 00:17:20.511 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:20.511 "host": "nqn.2016-06.io.spdk:host1", 00:17:20.511 "psk": "/tmp/tmp.864rhk5WtD", 00:17:20.511 "method": "nvmf_subsystem_add_host", 00:17:20.511 "req_id": 1 00:17:20.511 } 00:17:20.511 Got JSON-RPC error response 00:17:20.511 response: 00:17:20.511 { 00:17:20.511 "code": -32603, 00:17:20.511 "message": "Internal error" 00:17:20.511 } 00:17:20.511 18:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:20.511 18:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:20.511 18:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:20.511 18:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:20.511 18:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 3169643 00:17:20.511 18:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3169643 ']' 00:17:20.511 18:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3169643 00:17:20.511 18:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:20.511 18:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:20.511 18:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3169643 00:17:20.511 18:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:20.511 18:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:20.511 18:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3169643' 00:17:20.511 killing process with pid 3169643 00:17:20.511 18:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3169643 00:17:20.511 18:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3169643 00:17:20.769 18:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.864rhk5WtD 00:17:20.769 18:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:20.769 18:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:20.769 18:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:20.769 18:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:20.769 18:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3170071 00:17:20.769 18:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:20.769 18:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # 
waitforlisten 3170071 00:17:20.769 18:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3170071 ']' 00:17:20.769 18:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.769 18:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:20.769 18:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.769 18:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:20.769 18:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:21.028 [2024-07-24 18:59:58.403269] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:17:21.028 [2024-07-24 18:59:58.403352] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:21.028 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.028 [2024-07-24 18:59:58.471092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.028 [2024-07-24 18:59:58.585456] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:21.028 [2024-07-24 18:59:58.585520] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:21.028 [2024-07-24 18:59:58.585536] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:21.028 [2024-07-24 18:59:58.585558] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:21.028 [2024-07-24 18:59:58.585570] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
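With the key restored to 0600 at tls.sh@181, tls.sh@185 drives the same target-side RPC sequence again, and this time every step is expected to succeed. Consolidated from the trace (workspace paths shortened; flags exactly as logged):

rpc=scripts/rpc.py      # full path in the log: .../spdk/scripts/rpc.py
key=/tmp/tmp.864rhk5WtD # PSK interchange file, mode 0600

$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk "$key"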
00:17:21.028 [2024-07-24 18:59:58.585600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:21.961 18:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:21.961 18:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:21.961 18:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:21.961 18:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:21.961 18:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:21.961 18:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:21.961 18:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.864rhk5WtD 00:17:21.961 18:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.864rhk5WtD 00:17:21.961 18:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:22.218 [2024-07-24 18:59:59.673734] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:22.218 18:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:22.476 18:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:22.733 [2024-07-24 19:00:00.231297] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:22.733 [2024-07-24 19:00:00.231562] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:22.733 19:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:22.991 malloc0 00:17:22.991 19:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:23.247 19:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.864rhk5WtD 00:17:23.504 [2024-07-24 19:00:01.013534] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:23.504 19:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3170429 00:17:23.504 19:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:23.504 19:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:23.504 19:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3170429 /var/tmp/bdevperf.sock 00:17:23.504 19:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- 
# '[' -z 3170429 ']' 00:17:23.504 19:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:23.504 19:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:23.504 19:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:23.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:23.504 19:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:23.504 19:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:23.504 [2024-07-24 19:00:01.074109] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:17:23.504 [2024-07-24 19:00:01.074193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3170429 ] 00:17:23.504 EAL: No free 2048 kB hugepages reported on node 1 00:17:23.761 [2024-07-24 19:00:01.135711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.761 [2024-07-24 19:00:01.246991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:23.761 19:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:23.761 19:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:23.761 19:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.864rhk5WtD 00:17:24.019 [2024-07-24 19:00:01.588616] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:24.019 [2024-07-24 19:00:01.588746] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:24.276 TLSTESTn1 00:17:24.276 19:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:17:24.532 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:17:24.532 "subsystems": [ 00:17:24.532 { 00:17:24.532 "subsystem": "keyring", 00:17:24.532 "config": [] 00:17:24.532 }, 00:17:24.532 { 00:17:24.532 "subsystem": "iobuf", 00:17:24.532 "config": [ 00:17:24.532 { 00:17:24.532 "method": "iobuf_set_options", 00:17:24.532 "params": { 00:17:24.532 "small_pool_count": 8192, 00:17:24.532 "large_pool_count": 1024, 00:17:24.532 "small_bufsize": 8192, 00:17:24.532 "large_bufsize": 135168 00:17:24.532 } 00:17:24.532 } 00:17:24.532 ] 00:17:24.532 }, 00:17:24.532 { 00:17:24.532 "subsystem": "sock", 00:17:24.532 "config": [ 00:17:24.532 { 00:17:24.532 "method": "sock_set_default_impl", 00:17:24.532 "params": { 00:17:24.532 "impl_name": "posix" 00:17:24.532 } 00:17:24.532 }, 00:17:24.532 { 00:17:24.532 "method": "sock_impl_set_options", 00:17:24.532 "params": { 00:17:24.532 "impl_name": "ssl", 00:17:24.532 "recv_buf_size": 4096, 00:17:24.532 "send_buf_size": 4096, 
00:17:24.532 "enable_recv_pipe": true, 00:17:24.532 "enable_quickack": false, 00:17:24.532 "enable_placement_id": 0, 00:17:24.532 "enable_zerocopy_send_server": true, 00:17:24.532 "enable_zerocopy_send_client": false, 00:17:24.532 "zerocopy_threshold": 0, 00:17:24.532 "tls_version": 0, 00:17:24.532 "enable_ktls": false 00:17:24.532 } 00:17:24.532 }, 00:17:24.532 { 00:17:24.532 "method": "sock_impl_set_options", 00:17:24.532 "params": { 00:17:24.532 "impl_name": "posix", 00:17:24.532 "recv_buf_size": 2097152, 00:17:24.532 "send_buf_size": 2097152, 00:17:24.532 "enable_recv_pipe": true, 00:17:24.532 "enable_quickack": false, 00:17:24.532 "enable_placement_id": 0, 00:17:24.532 "enable_zerocopy_send_server": true, 00:17:24.532 "enable_zerocopy_send_client": false, 00:17:24.532 "zerocopy_threshold": 0, 00:17:24.532 "tls_version": 0, 00:17:24.532 "enable_ktls": false 00:17:24.532 } 00:17:24.532 } 00:17:24.532 ] 00:17:24.532 }, 00:17:24.532 { 00:17:24.532 "subsystem": "vmd", 00:17:24.532 "config": [] 00:17:24.532 }, 00:17:24.532 { 00:17:24.532 "subsystem": "accel", 00:17:24.532 "config": [ 00:17:24.532 { 00:17:24.532 "method": "accel_set_options", 00:17:24.532 "params": { 00:17:24.532 "small_cache_size": 128, 00:17:24.532 "large_cache_size": 16, 00:17:24.532 "task_count": 2048, 00:17:24.532 "sequence_count": 2048, 00:17:24.532 "buf_count": 2048 00:17:24.532 } 00:17:24.532 } 00:17:24.532 ] 00:17:24.532 }, 00:17:24.532 { 00:17:24.532 "subsystem": "bdev", 00:17:24.532 "config": [ 00:17:24.532 { 00:17:24.532 "method": "bdev_set_options", 00:17:24.532 "params": { 00:17:24.532 "bdev_io_pool_size": 65535, 00:17:24.532 "bdev_io_cache_size": 256, 00:17:24.532 "bdev_auto_examine": true, 00:17:24.532 "iobuf_small_cache_size": 128, 00:17:24.532 "iobuf_large_cache_size": 16 00:17:24.532 } 00:17:24.532 }, 00:17:24.532 { 00:17:24.532 "method": "bdev_raid_set_options", 00:17:24.532 "params": { 00:17:24.532 "process_window_size_kb": 1024, 00:17:24.532 "process_max_bandwidth_mb_sec": 0 00:17:24.532 } 00:17:24.532 }, 00:17:24.532 { 00:17:24.532 "method": "bdev_iscsi_set_options", 00:17:24.532 "params": { 00:17:24.532 "timeout_sec": 30 00:17:24.532 } 00:17:24.532 }, 00:17:24.532 { 00:17:24.532 "method": "bdev_nvme_set_options", 00:17:24.532 "params": { 00:17:24.532 "action_on_timeout": "none", 00:17:24.532 "timeout_us": 0, 00:17:24.532 "timeout_admin_us": 0, 00:17:24.532 "keep_alive_timeout_ms": 10000, 00:17:24.532 "arbitration_burst": 0, 00:17:24.532 "low_priority_weight": 0, 00:17:24.532 "medium_priority_weight": 0, 00:17:24.532 "high_priority_weight": 0, 00:17:24.532 "nvme_adminq_poll_period_us": 10000, 00:17:24.532 "nvme_ioq_poll_period_us": 0, 00:17:24.532 "io_queue_requests": 0, 00:17:24.532 "delay_cmd_submit": true, 00:17:24.532 "transport_retry_count": 4, 00:17:24.532 "bdev_retry_count": 3, 00:17:24.532 "transport_ack_timeout": 0, 00:17:24.532 "ctrlr_loss_timeout_sec": 0, 00:17:24.532 "reconnect_delay_sec": 0, 00:17:24.532 "fast_io_fail_timeout_sec": 0, 00:17:24.532 "disable_auto_failback": false, 00:17:24.532 "generate_uuids": false, 00:17:24.532 "transport_tos": 0, 00:17:24.532 "nvme_error_stat": false, 00:17:24.532 "rdma_srq_size": 0, 00:17:24.532 "io_path_stat": false, 00:17:24.532 "allow_accel_sequence": false, 00:17:24.532 "rdma_max_cq_size": 0, 00:17:24.532 "rdma_cm_event_timeout_ms": 0, 00:17:24.532 "dhchap_digests": [ 00:17:24.532 "sha256", 00:17:24.532 "sha384", 00:17:24.532 "sha512" 00:17:24.532 ], 00:17:24.532 "dhchap_dhgroups": [ 00:17:24.532 "null", 00:17:24.532 "ffdhe2048", 00:17:24.532 
"ffdhe3072", 00:17:24.532 "ffdhe4096", 00:17:24.532 "ffdhe6144", 00:17:24.532 "ffdhe8192" 00:17:24.532 ] 00:17:24.532 } 00:17:24.532 }, 00:17:24.532 { 00:17:24.532 "method": "bdev_nvme_set_hotplug", 00:17:24.532 "params": { 00:17:24.532 "period_us": 100000, 00:17:24.532 "enable": false 00:17:24.532 } 00:17:24.532 }, 00:17:24.532 { 00:17:24.532 "method": "bdev_malloc_create", 00:17:24.532 "params": { 00:17:24.532 "name": "malloc0", 00:17:24.532 "num_blocks": 8192, 00:17:24.532 "block_size": 4096, 00:17:24.532 "physical_block_size": 4096, 00:17:24.532 "uuid": "4c5758dd-26ae-4125-8e8b-dd22f16e54ef", 00:17:24.532 "optimal_io_boundary": 0, 00:17:24.532 "md_size": 0, 00:17:24.532 "dif_type": 0, 00:17:24.532 "dif_is_head_of_md": false, 00:17:24.532 "dif_pi_format": 0 00:17:24.532 } 00:17:24.532 }, 00:17:24.532 { 00:17:24.532 "method": "bdev_wait_for_examine" 00:17:24.532 } 00:17:24.532 ] 00:17:24.532 }, 00:17:24.532 { 00:17:24.532 "subsystem": "nbd", 00:17:24.532 "config": [] 00:17:24.532 }, 00:17:24.532 { 00:17:24.532 "subsystem": "scheduler", 00:17:24.532 "config": [ 00:17:24.532 { 00:17:24.532 "method": "framework_set_scheduler", 00:17:24.532 "params": { 00:17:24.532 "name": "static" 00:17:24.532 } 00:17:24.532 } 00:17:24.532 ] 00:17:24.532 }, 00:17:24.532 { 00:17:24.532 "subsystem": "nvmf", 00:17:24.532 "config": [ 00:17:24.532 { 00:17:24.532 "method": "nvmf_set_config", 00:17:24.532 "params": { 00:17:24.532 "discovery_filter": "match_any", 00:17:24.532 "admin_cmd_passthru": { 00:17:24.532 "identify_ctrlr": false 00:17:24.533 } 00:17:24.533 } 00:17:24.533 }, 00:17:24.533 { 00:17:24.533 "method": "nvmf_set_max_subsystems", 00:17:24.533 "params": { 00:17:24.533 "max_subsystems": 1024 00:17:24.533 } 00:17:24.533 }, 00:17:24.533 { 00:17:24.533 "method": "nvmf_set_crdt", 00:17:24.533 "params": { 00:17:24.533 "crdt1": 0, 00:17:24.533 "crdt2": 0, 00:17:24.533 "crdt3": 0 00:17:24.533 } 00:17:24.533 }, 00:17:24.533 { 00:17:24.533 "method": "nvmf_create_transport", 00:17:24.533 "params": { 00:17:24.533 "trtype": "TCP", 00:17:24.533 "max_queue_depth": 128, 00:17:24.533 "max_io_qpairs_per_ctrlr": 127, 00:17:24.533 "in_capsule_data_size": 4096, 00:17:24.533 "max_io_size": 131072, 00:17:24.533 "io_unit_size": 131072, 00:17:24.533 "max_aq_depth": 128, 00:17:24.533 "num_shared_buffers": 511, 00:17:24.533 "buf_cache_size": 4294967295, 00:17:24.533 "dif_insert_or_strip": false, 00:17:24.533 "zcopy": false, 00:17:24.533 "c2h_success": false, 00:17:24.533 "sock_priority": 0, 00:17:24.533 "abort_timeout_sec": 1, 00:17:24.533 "ack_timeout": 0, 00:17:24.533 "data_wr_pool_size": 0 00:17:24.533 } 00:17:24.533 }, 00:17:24.533 { 00:17:24.533 "method": "nvmf_create_subsystem", 00:17:24.533 "params": { 00:17:24.533 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:24.533 "allow_any_host": false, 00:17:24.533 "serial_number": "SPDK00000000000001", 00:17:24.533 "model_number": "SPDK bdev Controller", 00:17:24.533 "max_namespaces": 10, 00:17:24.533 "min_cntlid": 1, 00:17:24.533 "max_cntlid": 65519, 00:17:24.533 "ana_reporting": false 00:17:24.533 } 00:17:24.533 }, 00:17:24.533 { 00:17:24.533 "method": "nvmf_subsystem_add_host", 00:17:24.533 "params": { 00:17:24.533 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:24.533 "host": "nqn.2016-06.io.spdk:host1", 00:17:24.533 "psk": "/tmp/tmp.864rhk5WtD" 00:17:24.533 } 00:17:24.533 }, 00:17:24.533 { 00:17:24.533 "method": "nvmf_subsystem_add_ns", 00:17:24.533 "params": { 00:17:24.533 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:24.533 "namespace": { 00:17:24.533 "nsid": 1, 00:17:24.533 
"bdev_name": "malloc0", 00:17:24.533 "nguid": "4C5758DD26AE41258E8BDD22F16E54EF", 00:17:24.533 "uuid": "4c5758dd-26ae-4125-8e8b-dd22f16e54ef", 00:17:24.533 "no_auto_visible": false 00:17:24.533 } 00:17:24.533 } 00:17:24.533 }, 00:17:24.533 { 00:17:24.533 "method": "nvmf_subsystem_add_listener", 00:17:24.533 "params": { 00:17:24.533 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:24.533 "listen_address": { 00:17:24.533 "trtype": "TCP", 00:17:24.533 "adrfam": "IPv4", 00:17:24.533 "traddr": "10.0.0.2", 00:17:24.533 "trsvcid": "4420" 00:17:24.533 }, 00:17:24.533 "secure_channel": true 00:17:24.533 } 00:17:24.533 } 00:17:24.533 ] 00:17:24.533 } 00:17:24.533 ] 00:17:24.533 }' 00:17:24.533 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:24.789 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:17:24.789 "subsystems": [ 00:17:24.789 { 00:17:24.789 "subsystem": "keyring", 00:17:24.789 "config": [] 00:17:24.789 }, 00:17:24.789 { 00:17:24.789 "subsystem": "iobuf", 00:17:24.789 "config": [ 00:17:24.789 { 00:17:24.789 "method": "iobuf_set_options", 00:17:24.789 "params": { 00:17:24.789 "small_pool_count": 8192, 00:17:24.789 "large_pool_count": 1024, 00:17:24.789 "small_bufsize": 8192, 00:17:24.789 "large_bufsize": 135168 00:17:24.789 } 00:17:24.789 } 00:17:24.789 ] 00:17:24.789 }, 00:17:24.789 { 00:17:24.789 "subsystem": "sock", 00:17:24.789 "config": [ 00:17:24.789 { 00:17:24.789 "method": "sock_set_default_impl", 00:17:24.789 "params": { 00:17:24.789 "impl_name": "posix" 00:17:24.789 } 00:17:24.789 }, 00:17:24.789 { 00:17:24.789 "method": "sock_impl_set_options", 00:17:24.789 "params": { 00:17:24.789 "impl_name": "ssl", 00:17:24.789 "recv_buf_size": 4096, 00:17:24.789 "send_buf_size": 4096, 00:17:24.789 "enable_recv_pipe": true, 00:17:24.789 "enable_quickack": false, 00:17:24.789 "enable_placement_id": 0, 00:17:24.789 "enable_zerocopy_send_server": true, 00:17:24.789 "enable_zerocopy_send_client": false, 00:17:24.789 "zerocopy_threshold": 0, 00:17:24.789 "tls_version": 0, 00:17:24.789 "enable_ktls": false 00:17:24.789 } 00:17:24.789 }, 00:17:24.789 { 00:17:24.789 "method": "sock_impl_set_options", 00:17:24.789 "params": { 00:17:24.789 "impl_name": "posix", 00:17:24.789 "recv_buf_size": 2097152, 00:17:24.789 "send_buf_size": 2097152, 00:17:24.789 "enable_recv_pipe": true, 00:17:24.789 "enable_quickack": false, 00:17:24.789 "enable_placement_id": 0, 00:17:24.789 "enable_zerocopy_send_server": true, 00:17:24.789 "enable_zerocopy_send_client": false, 00:17:24.789 "zerocopy_threshold": 0, 00:17:24.789 "tls_version": 0, 00:17:24.789 "enable_ktls": false 00:17:24.789 } 00:17:24.789 } 00:17:24.789 ] 00:17:24.789 }, 00:17:24.789 { 00:17:24.789 "subsystem": "vmd", 00:17:24.789 "config": [] 00:17:24.789 }, 00:17:24.789 { 00:17:24.789 "subsystem": "accel", 00:17:24.789 "config": [ 00:17:24.789 { 00:17:24.789 "method": "accel_set_options", 00:17:24.789 "params": { 00:17:24.789 "small_cache_size": 128, 00:17:24.789 "large_cache_size": 16, 00:17:24.789 "task_count": 2048, 00:17:24.789 "sequence_count": 2048, 00:17:24.789 "buf_count": 2048 00:17:24.789 } 00:17:24.789 } 00:17:24.789 ] 00:17:24.789 }, 00:17:24.789 { 00:17:24.789 "subsystem": "bdev", 00:17:24.789 "config": [ 00:17:24.789 { 00:17:24.789 "method": "bdev_set_options", 00:17:24.789 "params": { 00:17:24.789 "bdev_io_pool_size": 65535, 00:17:24.789 "bdev_io_cache_size": 256, 00:17:24.789 
"bdev_auto_examine": true, 00:17:24.789 "iobuf_small_cache_size": 128, 00:17:24.789 "iobuf_large_cache_size": 16 00:17:24.789 } 00:17:24.789 }, 00:17:24.789 { 00:17:24.789 "method": "bdev_raid_set_options", 00:17:24.789 "params": { 00:17:24.789 "process_window_size_kb": 1024, 00:17:24.789 "process_max_bandwidth_mb_sec": 0 00:17:24.789 } 00:17:24.789 }, 00:17:24.789 { 00:17:24.789 "method": "bdev_iscsi_set_options", 00:17:24.789 "params": { 00:17:24.789 "timeout_sec": 30 00:17:24.789 } 00:17:24.789 }, 00:17:24.789 { 00:17:24.789 "method": "bdev_nvme_set_options", 00:17:24.789 "params": { 00:17:24.789 "action_on_timeout": "none", 00:17:24.789 "timeout_us": 0, 00:17:24.789 "timeout_admin_us": 0, 00:17:24.789 "keep_alive_timeout_ms": 10000, 00:17:24.789 "arbitration_burst": 0, 00:17:24.789 "low_priority_weight": 0, 00:17:24.789 "medium_priority_weight": 0, 00:17:24.789 "high_priority_weight": 0, 00:17:24.789 "nvme_adminq_poll_period_us": 10000, 00:17:24.789 "nvme_ioq_poll_period_us": 0, 00:17:24.789 "io_queue_requests": 512, 00:17:24.789 "delay_cmd_submit": true, 00:17:24.789 "transport_retry_count": 4, 00:17:24.789 "bdev_retry_count": 3, 00:17:24.789 "transport_ack_timeout": 0, 00:17:24.789 "ctrlr_loss_timeout_sec": 0, 00:17:24.789 "reconnect_delay_sec": 0, 00:17:24.789 "fast_io_fail_timeout_sec": 0, 00:17:24.789 "disable_auto_failback": false, 00:17:24.789 "generate_uuids": false, 00:17:24.789 "transport_tos": 0, 00:17:24.789 "nvme_error_stat": false, 00:17:24.789 "rdma_srq_size": 0, 00:17:24.789 "io_path_stat": false, 00:17:24.789 "allow_accel_sequence": false, 00:17:24.789 "rdma_max_cq_size": 0, 00:17:24.789 "rdma_cm_event_timeout_ms": 0, 00:17:24.789 "dhchap_digests": [ 00:17:24.789 "sha256", 00:17:24.789 "sha384", 00:17:24.789 "sha512" 00:17:24.789 ], 00:17:24.789 "dhchap_dhgroups": [ 00:17:24.789 "null", 00:17:24.789 "ffdhe2048", 00:17:24.789 "ffdhe3072", 00:17:24.789 "ffdhe4096", 00:17:24.789 "ffdhe6144", 00:17:24.789 "ffdhe8192" 00:17:24.789 ] 00:17:24.789 } 00:17:24.789 }, 00:17:24.789 { 00:17:24.789 "method": "bdev_nvme_attach_controller", 00:17:24.789 "params": { 00:17:24.789 "name": "TLSTEST", 00:17:24.789 "trtype": "TCP", 00:17:24.789 "adrfam": "IPv4", 00:17:24.789 "traddr": "10.0.0.2", 00:17:24.789 "trsvcid": "4420", 00:17:24.789 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:24.789 "prchk_reftag": false, 00:17:24.789 "prchk_guard": false, 00:17:24.789 "ctrlr_loss_timeout_sec": 0, 00:17:24.789 "reconnect_delay_sec": 0, 00:17:24.789 "fast_io_fail_timeout_sec": 0, 00:17:24.789 "psk": "/tmp/tmp.864rhk5WtD", 00:17:24.789 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:24.789 "hdgst": false, 00:17:24.789 "ddgst": false 00:17:24.789 } 00:17:24.789 }, 00:17:24.789 { 00:17:24.789 "method": "bdev_nvme_set_hotplug", 00:17:24.789 "params": { 00:17:24.789 "period_us": 100000, 00:17:24.789 "enable": false 00:17:24.789 } 00:17:24.789 }, 00:17:24.789 { 00:17:24.789 "method": "bdev_wait_for_examine" 00:17:24.789 } 00:17:24.789 ] 00:17:24.789 }, 00:17:24.789 { 00:17:24.789 "subsystem": "nbd", 00:17:24.789 "config": [] 00:17:24.789 } 00:17:24.789 ] 00:17:24.789 }' 00:17:24.789 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 3170429 00:17:24.789 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3170429 ']' 00:17:24.789 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3170429 00:17:24.789 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 
00:17:24.789 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:24.789 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3170429 00:17:24.789 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:24.789 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:24.789 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3170429' 00:17:24.789 killing process with pid 3170429 00:17:24.789 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3170429 00:17:24.789 Received shutdown signal, test time was about 10.000000 seconds 00:17:24.789 00:17:24.789 Latency(us) 00:17:24.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.789 =================================================================================================================== 00:17:24.789 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:24.789 [2024-07-24 19:00:02.359826] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:24.789 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3170429 00:17:25.044 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 3170071 00:17:25.044 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3170071 ']' 00:17:25.044 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3170071 00:17:25.044 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:25.044 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:25.044 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3170071 00:17:25.301 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:25.301 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:25.301 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3170071' 00:17:25.301 killing process with pid 3170071 00:17:25.301 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3170071 00:17:25.301 [2024-07-24 19:00:02.655120] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:25.301 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3170071 00:17:25.559 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:25.559 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:25.559 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:17:25.559 "subsystems": [ 00:17:25.559 { 00:17:25.559 "subsystem": "keyring", 00:17:25.559 "config": [] 00:17:25.559 }, 00:17:25.559 { 00:17:25.559 "subsystem": "iobuf", 00:17:25.559 "config": [ 00:17:25.559 { 00:17:25.559 "method": "iobuf_set_options", 
00:17:25.559 "params": { 00:17:25.559 "small_pool_count": 8192, 00:17:25.559 "large_pool_count": 1024, 00:17:25.559 "small_bufsize": 8192, 00:17:25.559 "large_bufsize": 135168 00:17:25.559 } 00:17:25.559 } 00:17:25.559 ] 00:17:25.559 }, 00:17:25.559 { 00:17:25.559 "subsystem": "sock", 00:17:25.559 "config": [ 00:17:25.559 { 00:17:25.559 "method": "sock_set_default_impl", 00:17:25.559 "params": { 00:17:25.559 "impl_name": "posix" 00:17:25.559 } 00:17:25.559 }, 00:17:25.559 { 00:17:25.559 "method": "sock_impl_set_options", 00:17:25.559 "params": { 00:17:25.559 "impl_name": "ssl", 00:17:25.559 "recv_buf_size": 4096, 00:17:25.559 "send_buf_size": 4096, 00:17:25.559 "enable_recv_pipe": true, 00:17:25.559 "enable_quickack": false, 00:17:25.559 "enable_placement_id": 0, 00:17:25.559 "enable_zerocopy_send_server": true, 00:17:25.559 "enable_zerocopy_send_client": false, 00:17:25.559 "zerocopy_threshold": 0, 00:17:25.559 "tls_version": 0, 00:17:25.559 "enable_ktls": false 00:17:25.559 } 00:17:25.559 }, 00:17:25.559 { 00:17:25.559 "method": "sock_impl_set_options", 00:17:25.559 "params": { 00:17:25.559 "impl_name": "posix", 00:17:25.559 "recv_buf_size": 2097152, 00:17:25.559 "send_buf_size": 2097152, 00:17:25.559 "enable_recv_pipe": true, 00:17:25.559 "enable_quickack": false, 00:17:25.559 "enable_placement_id": 0, 00:17:25.559 "enable_zerocopy_send_server": true, 00:17:25.559 "enable_zerocopy_send_client": false, 00:17:25.559 "zerocopy_threshold": 0, 00:17:25.559 "tls_version": 0, 00:17:25.559 "enable_ktls": false 00:17:25.559 } 00:17:25.559 } 00:17:25.559 ] 00:17:25.559 }, 00:17:25.559 { 00:17:25.559 "subsystem": "vmd", 00:17:25.559 "config": [] 00:17:25.559 }, 00:17:25.559 { 00:17:25.559 "subsystem": "accel", 00:17:25.559 "config": [ 00:17:25.559 { 00:17:25.559 "method": "accel_set_options", 00:17:25.559 "params": { 00:17:25.559 "small_cache_size": 128, 00:17:25.559 "large_cache_size": 16, 00:17:25.559 "task_count": 2048, 00:17:25.559 "sequence_count": 2048, 00:17:25.559 "buf_count": 2048 00:17:25.559 } 00:17:25.559 } 00:17:25.559 ] 00:17:25.559 }, 00:17:25.559 { 00:17:25.559 "subsystem": "bdev", 00:17:25.559 "config": [ 00:17:25.559 { 00:17:25.559 "method": "bdev_set_options", 00:17:25.559 "params": { 00:17:25.559 "bdev_io_pool_size": 65535, 00:17:25.559 "bdev_io_cache_size": 256, 00:17:25.559 "bdev_auto_examine": true, 00:17:25.559 "iobuf_small_cache_size": 128, 00:17:25.559 "iobuf_large_cache_size": 16 00:17:25.559 } 00:17:25.559 }, 00:17:25.559 { 00:17:25.559 "method": "bdev_raid_set_options", 00:17:25.559 "params": { 00:17:25.559 "process_window_size_kb": 1024, 00:17:25.559 "process_max_bandwidth_mb_sec": 0 00:17:25.559 } 00:17:25.559 }, 00:17:25.559 { 00:17:25.559 "method": "bdev_iscsi_set_options", 00:17:25.559 "params": { 00:17:25.559 "timeout_sec": 30 00:17:25.559 } 00:17:25.559 }, 00:17:25.559 { 00:17:25.559 "method": "bdev_nvme_set_options", 00:17:25.559 "params": { 00:17:25.559 "action_on_timeout": "none", 00:17:25.559 "timeout_us": 0, 00:17:25.559 "timeout_admin_us": 0, 00:17:25.559 "keep_alive_timeout_ms": 10000, 00:17:25.559 "arbitration_burst": 0, 00:17:25.559 "low_priority_weight": 0, 00:17:25.559 "medium_priority_weight": 0, 00:17:25.559 "high_priority_weight": 0, 00:17:25.559 "nvme_adminq_poll_period_us": 10000, 00:17:25.559 "nvme_ioq_poll_period_us": 0, 00:17:25.559 "io_queue_requests": 0, 00:17:25.559 "delay_cmd_submit": true, 00:17:25.559 "transport_retry_count": 4, 00:17:25.559 "bdev_retry_count": 3, 00:17:25.559 "transport_ack_timeout": 0, 00:17:25.559 
"ctrlr_loss_timeout_sec": 0, 00:17:25.559 "reconnect_delay_sec": 0, 00:17:25.559 "fast_io_fail_timeout_sec": 0, 00:17:25.559 "disable_auto_failback": false, 00:17:25.559 "generate_uuids": false, 00:17:25.559 "transport_tos": 0, 00:17:25.559 "nvme_error_stat": false, 00:17:25.559 "rdma_srq_size": 0, 00:17:25.559 "io_path_stat": false, 00:17:25.559 "allow_accel_sequence": false, 00:17:25.559 "rdma_max_cq_size": 0, 00:17:25.559 "rdma_cm_event_timeout_ms": 0, 00:17:25.559 "dhchap_digests": [ 00:17:25.559 "sha256", 00:17:25.559 "sha384", 00:17:25.559 "sha512" 00:17:25.559 ], 00:17:25.559 "dhchap_dhgroups": [ 00:17:25.559 "null", 00:17:25.559 "ffdhe2048", 00:17:25.559 "ffdhe3072", 00:17:25.559 "ffdhe4096", 00:17:25.559 "ffdhe6144", 00:17:25.559 "ffdhe8192" 00:17:25.559 ] 00:17:25.559 } 00:17:25.559 }, 00:17:25.559 { 00:17:25.559 "method": "bdev_nvme_set_hotplug", 00:17:25.559 "params": { 00:17:25.559 "period_us": 100000, 00:17:25.559 "enable": false 00:17:25.559 } 00:17:25.559 }, 00:17:25.559 { 00:17:25.559 "method": "bdev_malloc_create", 00:17:25.559 "params": { 00:17:25.559 "name": "malloc0", 00:17:25.559 "num_blocks": 8192, 00:17:25.559 "block_size": 4096, 00:17:25.559 "physical_block_size": 4096, 00:17:25.559 "uuid": "4c5758dd-26ae-4125-8e8b-dd22f16e54ef", 00:17:25.559 "optimal_io_boundary": 0, 00:17:25.559 "md_size": 0, 00:17:25.559 "dif_type": 0, 00:17:25.559 "dif_is_head_of_md": false, 00:17:25.559 "dif_pi_format": 0 00:17:25.559 } 00:17:25.559 }, 00:17:25.559 { 00:17:25.559 "method": "bdev_wait_for_examine" 00:17:25.559 } 00:17:25.559 ] 00:17:25.559 }, 00:17:25.559 { 00:17:25.559 "subsystem": "nbd", 00:17:25.559 "config": [] 00:17:25.559 }, 00:17:25.559 { 00:17:25.559 "subsystem": "scheduler", 00:17:25.559 "config": [ 00:17:25.559 { 00:17:25.559 "method": "framework_set_scheduler", 00:17:25.559 "params": { 00:17:25.559 "name": "static" 00:17:25.559 } 00:17:25.559 } 00:17:25.559 ] 00:17:25.559 }, 00:17:25.559 { 00:17:25.559 "subsystem": "nvmf", 00:17:25.559 "config": [ 00:17:25.559 { 00:17:25.559 "method": "nvmf_set_config", 00:17:25.559 "params": { 00:17:25.559 "discovery_filter": "match_any", 00:17:25.559 "admin_cmd_passthru": { 00:17:25.559 "identify_ctrlr": false 00:17:25.559 } 00:17:25.559 } 00:17:25.559 }, 00:17:25.559 { 00:17:25.559 "method": "nvmf_set_max_subsystems", 00:17:25.560 "params": { 00:17:25.560 "max_subsystems": 1024 00:17:25.560 } 00:17:25.560 }, 00:17:25.560 { 00:17:25.560 "method": "nvmf_set_crdt", 00:17:25.560 "params": { 00:17:25.560 "crdt1": 0, 00:17:25.560 "crdt2": 0, 00:17:25.560 "crdt3": 0 00:17:25.560 } 00:17:25.560 }, 00:17:25.560 { 00:17:25.560 "method": "nvmf_create_transport", 00:17:25.560 "params": { 00:17:25.560 "trtype": "TCP", 00:17:25.560 "max_queue_depth": 128, 00:17:25.560 "max_io_qpairs_per_ctrlr": 127, 00:17:25.560 "in_capsule_data_size": 4096, 00:17:25.560 "max_io_size": 131072, 00:17:25.560 "io_unit_size": 131072, 00:17:25.560 "max_aq_depth": 128, 00:17:25.560 "num_shared_buffers": 511, 00:17:25.560 "buf_cache_size": 4294967295, 00:17:25.560 "dif_insert_or_strip": false, 00:17:25.560 "zcopy": false, 00:17:25.560 "c2h_success": false, 00:17:25.560 "sock_priority": 0, 00:17:25.560 "abort_timeout_sec": 1, 00:17:25.560 "ack_timeout": 0, 00:17:25.560 "data_wr_pool_size": 0 00:17:25.560 } 00:17:25.560 }, 00:17:25.560 { 00:17:25.560 "method": "nvmf_create_subsystem", 00:17:25.560 "params": { 00:17:25.560 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:25.560 "allow_any_host": false, 00:17:25.560 "serial_number": "SPDK00000000000001", 00:17:25.560 
"model_number": "SPDK bdev Controller", 00:17:25.560 "max_namespaces": 10, 00:17:25.560 "min_cntlid": 1, 00:17:25.560 "max_cntlid": 65519, 00:17:25.560 "ana_reporting": false 00:17:25.560 } 00:17:25.560 }, 00:17:25.560 { 00:17:25.560 "method": "nvmf_subsystem_add_host", 00:17:25.560 "params": { 00:17:25.560 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:25.560 "host": "nqn.2016-06.io.spdk:host1", 00:17:25.560 "psk": "/tmp/tmp.864rhk5WtD" 00:17:25.560 } 00:17:25.560 }, 00:17:25.560 { 00:17:25.560 "method": "nvmf_subsystem_add_ns", 00:17:25.560 "params": { 00:17:25.560 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:25.560 "namespace": { 00:17:25.560 "nsid": 1, 00:17:25.560 "bdev_name": "malloc0", 00:17:25.560 "nguid": "4C5758DD26AE41258E8BDD22F16E54EF", 00:17:25.560 "uuid": "4c5758dd-26ae-4125-8e8b-dd22f16e54ef", 00:17:25.560 "no_auto_visible": false 00:17:25.560 } 00:17:25.560 } 00:17:25.560 }, 00:17:25.560 { 00:17:25.560 "method": "nvmf_subsystem_add_listener", 00:17:25.560 "params": { 00:17:25.560 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:25.560 "listen_address": { 00:17:25.560 "trtype": "TCP", 00:17:25.560 "adrfam": "IPv4", 00:17:25.560 "traddr": "10.0.0.2", 00:17:25.560 "trsvcid": "4420" 00:17:25.560 }, 00:17:25.560 "secure_channel": true 00:17:25.560 } 00:17:25.560 } 00:17:25.560 ] 00:17:25.560 } 00:17:25.560 ] 00:17:25.560 }' 00:17:25.560 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:25.560 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:25.560 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3170758 00:17:25.560 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:25.560 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3170758 00:17:25.560 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3170758 ']' 00:17:25.560 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.560 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:25.560 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.560 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:25.560 19:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:25.560 [2024-07-24 19:00:02.982216] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:17:25.560 [2024-07-24 19:00:02.982291] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:25.560 EAL: No free 2048 kB hugepages reported on node 1 00:17:25.560 [2024-07-24 19:00:03.044884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.560 [2024-07-24 19:00:03.156950] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:25.560 [2024-07-24 19:00:03.157015] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:25.560 [2024-07-24 19:00:03.157045] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:25.560 [2024-07-24 19:00:03.157058] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:25.560 [2024-07-24 19:00:03.157075] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:25.560 [2024-07-24 19:00:03.157162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.817 [2024-07-24 19:00:03.401647] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:26.075 [2024-07-24 19:00:03.429789] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:26.075 [2024-07-24 19:00:03.445849] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:26.075 [2024-07-24 19:00:03.446096] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:26.639 19:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:26.639 19:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:26.639 19:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:26.639 19:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:26.639 19:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:26.639 19:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:26.639 19:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3170906 00:17:26.639 19:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3170906 /var/tmp/bdevperf.sock 00:17:26.639 19:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3170906 ']' 00:17:26.639 19:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:26.639 19:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:26.639 19:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:26.639 19:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:26.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
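The bdevperf launch traced at tls.sh@204 starts the app idle and drives it over RPC: -z waits for configuration instead of running immediately, -r names the RPC socket, and -q/-o/-w/-t set queue depth, IO size in bytes, workload, and run time. Condensed from the trace:

  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 &
  # Once the TLS controller is attached, the run is kicked off separately
  # (seen at tls.sh@211 further down):
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests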
00:17:26.639 19:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:26.639 19:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:17:26.639 "subsystems": [ 00:17:26.639 { 00:17:26.639 "subsystem": "keyring", 00:17:26.639 "config": [] 00:17:26.639 }, 00:17:26.639 { 00:17:26.639 "subsystem": "iobuf", 00:17:26.639 "config": [ 00:17:26.639 { 00:17:26.639 "method": "iobuf_set_options", 00:17:26.639 "params": { 00:17:26.639 "small_pool_count": 8192, 00:17:26.639 "large_pool_count": 1024, 00:17:26.639 "small_bufsize": 8192, 00:17:26.639 "large_bufsize": 135168 00:17:26.639 } 00:17:26.639 } 00:17:26.639 ] 00:17:26.639 }, 00:17:26.639 { 00:17:26.639 "subsystem": "sock", 00:17:26.639 "config": [ 00:17:26.639 { 00:17:26.639 "method": "sock_set_default_impl", 00:17:26.639 "params": { 00:17:26.639 "impl_name": "posix" 00:17:26.639 } 00:17:26.639 }, 00:17:26.639 { 00:17:26.639 "method": "sock_impl_set_options", 00:17:26.639 "params": { 00:17:26.639 "impl_name": "ssl", 00:17:26.639 "recv_buf_size": 4096, 00:17:26.639 "send_buf_size": 4096, 00:17:26.639 "enable_recv_pipe": true, 00:17:26.639 "enable_quickack": false, 00:17:26.639 "enable_placement_id": 0, 00:17:26.639 "enable_zerocopy_send_server": true, 00:17:26.639 "enable_zerocopy_send_client": false, 00:17:26.639 "zerocopy_threshold": 0, 00:17:26.639 "tls_version": 0, 00:17:26.639 "enable_ktls": false 00:17:26.639 } 00:17:26.639 }, 00:17:26.639 { 00:17:26.639 "method": "sock_impl_set_options", 00:17:26.639 "params": { 00:17:26.639 "impl_name": "posix", 00:17:26.639 "recv_buf_size": 2097152, 00:17:26.639 "send_buf_size": 2097152, 00:17:26.639 "enable_recv_pipe": true, 00:17:26.639 "enable_quickack": false, 00:17:26.639 "enable_placement_id": 0, 00:17:26.639 "enable_zerocopy_send_server": true, 00:17:26.639 "enable_zerocopy_send_client": false, 00:17:26.639 "zerocopy_threshold": 0, 00:17:26.639 "tls_version": 0, 00:17:26.639 "enable_ktls": false 00:17:26.639 } 00:17:26.639 } 00:17:26.639 ] 00:17:26.639 }, 00:17:26.639 { 00:17:26.639 "subsystem": "vmd", 00:17:26.639 "config": [] 00:17:26.639 }, 00:17:26.639 { 00:17:26.639 "subsystem": "accel", 00:17:26.639 "config": [ 00:17:26.639 { 00:17:26.639 "method": "accel_set_options", 00:17:26.639 "params": { 00:17:26.639 "small_cache_size": 128, 00:17:26.639 "large_cache_size": 16, 00:17:26.639 "task_count": 2048, 00:17:26.639 "sequence_count": 2048, 00:17:26.639 "buf_count": 2048 00:17:26.639 } 00:17:26.639 } 00:17:26.639 ] 00:17:26.639 }, 00:17:26.639 { 00:17:26.639 "subsystem": "bdev", 00:17:26.639 "config": [ 00:17:26.639 { 00:17:26.639 "method": "bdev_set_options", 00:17:26.639 "params": { 00:17:26.639 "bdev_io_pool_size": 65535, 00:17:26.639 "bdev_io_cache_size": 256, 00:17:26.639 "bdev_auto_examine": true, 00:17:26.639 "iobuf_small_cache_size": 128, 00:17:26.639 "iobuf_large_cache_size": 16 00:17:26.639 } 00:17:26.639 }, 00:17:26.639 { 00:17:26.639 "method": "bdev_raid_set_options", 00:17:26.639 "params": { 00:17:26.639 "process_window_size_kb": 1024, 00:17:26.639 "process_max_bandwidth_mb_sec": 0 00:17:26.639 } 00:17:26.639 }, 00:17:26.639 { 00:17:26.639 "method": "bdev_iscsi_set_options", 00:17:26.639 "params": { 00:17:26.639 "timeout_sec": 30 00:17:26.639 } 00:17:26.639 }, 00:17:26.639 { 00:17:26.639 "method": "bdev_nvme_set_options", 00:17:26.639 "params": { 00:17:26.639 "action_on_timeout": "none", 00:17:26.639 "timeout_us": 0, 00:17:26.639 "timeout_admin_us": 0, 00:17:26.639 "keep_alive_timeout_ms": 10000, 00:17:26.639 
"arbitration_burst": 0, 00:17:26.639 "low_priority_weight": 0, 00:17:26.639 "medium_priority_weight": 0, 00:17:26.639 "high_priority_weight": 0, 00:17:26.639 "nvme_adminq_poll_period_us": 10000, 00:17:26.639 "nvme_ioq_poll_period_us": 0, 00:17:26.639 "io_queue_requests": 512, 00:17:26.639 "delay_cmd_submit": true, 00:17:26.639 "transport_retry_count": 4, 00:17:26.639 "bdev_retry_count": 3, 00:17:26.639 "transport_ack_timeout": 0, 00:17:26.639 "ctrlr_loss_timeout_sec": 0, 00:17:26.639 "reconnect_delay_sec": 0, 00:17:26.639 "fast_io_fail_timeout_sec": 0, 00:17:26.640 "disable_auto_failback": false, 00:17:26.640 "generate_uuids": false, 00:17:26.640 "transport_tos": 0, 00:17:26.640 "nvme_error_stat": false, 00:17:26.640 "rdma_srq_size": 0, 00:17:26.640 "io_path_stat": false, 00:17:26.640 "allow_accel_sequence": false, 00:17:26.640 "rdma_max_cq_size": 0, 00:17:26.640 "rdma_cm_event_timeout_ms": 0, 00:17:26.640 "dhchap_digests": [ 00:17:26.640 "sha256", 00:17:26.640 "sha384", 00:17:26.640 "sha512" 00:17:26.640 ], 00:17:26.640 "dhchap_dhgroups": [ 00:17:26.640 "null", 00:17:26.640 "ffdhe2048", 00:17:26.640 "ffdhe3072", 00:17:26.640 "ffdhe4096", 00:17:26.640 "ffdhe6144", 00:17:26.640 "ffdhe8192" 00:17:26.640 ] 00:17:26.640 } 00:17:26.640 }, 00:17:26.640 { 00:17:26.640 "method": "bdev_nvme_attach_controller", 00:17:26.640 "params": { 00:17:26.640 "name": "TLSTEST", 00:17:26.640 "trtype": "TCP", 00:17:26.640 "adrfam": "IPv4", 00:17:26.640 "traddr": "10.0.0.2", 00:17:26.640 "trsvcid": "4420", 00:17:26.640 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:26.640 "prchk_reftag": false, 00:17:26.640 "prchk_guard": false, 00:17:26.640 "ctrlr_loss_timeout_sec": 0, 00:17:26.640 "reconnect_delay_sec": 0, 00:17:26.640 "fast_io_fail_timeout_sec": 0, 00:17:26.640 "psk": "/tmp/tmp.864rhk5WtD", 00:17:26.640 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:26.640 "hdgst": false, 00:17:26.640 "ddgst": false 00:17:26.640 } 00:17:26.640 }, 00:17:26.640 { 00:17:26.640 "method": "bdev_nvme_set_hotplug", 00:17:26.640 "params": { 00:17:26.640 "period_us": 100000, 00:17:26.640 "enable": false 00:17:26.640 } 00:17:26.640 }, 00:17:26.640 { 00:17:26.640 "method": "bdev_wait_for_examine" 00:17:26.640 } 00:17:26.640 ] 00:17:26.640 }, 00:17:26.640 { 00:17:26.640 "subsystem": "nbd", 00:17:26.640 "config": [] 00:17:26.640 } 00:17:26.640 ] 00:17:26.640 }' 00:17:26.640 19:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:26.640 [2024-07-24 19:00:04.037765] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
00:17:26.640 [2024-07-24 19:00:04.037849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3170906 ] 00:17:26.640 EAL: No free 2048 kB hugepages reported on node 1 00:17:26.640 [2024-07-24 19:00:04.094742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.640 [2024-07-24 19:00:04.200721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:26.896 [2024-07-24 19:00:04.373212] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:26.896 [2024-07-24 19:00:04.373343] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:27.833 19:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:27.833 19:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:27.833 19:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:27.833 Running I/O for 10 seconds... 00:17:37.791 00:17:37.791 Latency(us) 00:17:37.791 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.791 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:37.791 Verification LBA range: start 0x0 length 0x2000 00:17:37.791 TLSTESTn1 : 10.06 2016.47 7.88 0.00 0.00 63295.65 6310.87 85827.89 00:17:37.791 =================================================================================================================== 00:17:37.791 Total : 2016.47 7.88 0.00 0.00 63295.65 6310.87 85827.89 00:17:37.791 0 00:17:37.791 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:37.791 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 3170906 00:17:37.791 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3170906 ']' 00:17:37.791 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3170906 00:17:37.791 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:37.791 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:37.791 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3170906 00:17:37.791 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:37.791 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:37.791 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3170906' 00:17:37.791 killing process with pid 3170906 00:17:37.791 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3170906 00:17:37.791 Received shutdown signal, test time was about 10.000000 seconds 00:17:37.791 00:17:37.791 Latency(us) 00:17:37.791 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.791 
=================================================================================================================== 00:17:37.791 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:37.791 [2024-07-24 19:00:15.314618] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:37.791 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3170906 00:17:38.049 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 3170758 00:17:38.049 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3170758 ']' 00:17:38.049 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3170758 00:17:38.049 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:38.049 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:38.049 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3170758 00:17:38.049 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:38.049 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:38.049 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3170758' 00:17:38.049 killing process with pid 3170758 00:17:38.049 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3170758 00:17:38.049 [2024-07-24 19:00:15.611605] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:38.049 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3170758 00:17:38.307 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:17:38.307 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:38.307 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:38.307 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:38.307 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3172740 00:17:38.307 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:38.307 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3172740 00:17:38.307 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3172740 ']' 00:17:38.307 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.307 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:38.307 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
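killprocess, used above to tear down both bdevperf (pid 3170906) and the target (pid 3170758), is essentially kill-then-reap; a stripped-down sketch (the real helper, as the trace shows, also verifies the process name via ps and refuses to kill sudo):

  killprocess_sketch() {
      local pid=$1
      kill "$pid" && wait "$pid"   # reap it so the exit status is collected
  }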
00:17:38.307 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:38.307 19:00:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:38.580 [2024-07-24 19:00:15.948751] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:17:38.580 [2024-07-24 19:00:15.948837] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:38.580 EAL: No free 2048 kB hugepages reported on node 1 00:17:38.580 [2024-07-24 19:00:16.017649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.580 [2024-07-24 19:00:16.137056] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:38.580 [2024-07-24 19:00:16.137130] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:38.580 [2024-07-24 19:00:16.137147] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:38.580 [2024-07-24 19:00:16.137162] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:38.580 [2024-07-24 19:00:16.137173] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:38.580 [2024-07-24 19:00:16.137215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.844 19:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:38.844 19:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:38.844 19:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:38.844 19:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:38.844 19:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:38.844 19:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.844 19:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.864rhk5WtD 00:17:38.844 19:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.864rhk5WtD 00:17:38.844 19:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:39.101 [2024-07-24 19:00:16.523380] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.101 19:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:39.358 19:00:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:39.616 [2024-07-24 19:00:16.996625] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:39.616 [2024-07-24 19:00:16.996854] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:39.616 19:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:39.874 malloc0 00:17:39.874 19:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:40.130 19:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.864rhk5WtD 00:17:40.388 [2024-07-24 19:00:17.755377] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:40.388 19:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3173025 00:17:40.388 19:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:40.388 19:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:40.388 19:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3173025 /var/tmp/bdevperf.sock 00:17:40.388 19:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3173025 ']' 00:17:40.388 19:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:40.388 19:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:40.388 19:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:40.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:40.388 19:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:40.388 19:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:40.388 [2024-07-24 19:00:17.819391] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
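setup_nvmf_tgt (tls.sh@49-58, traced above) builds the TLS-enabled target in a handful of RPCs; condensed from the trace, with the long script path shortened to rpc.py:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -k        # -k marks the listener as TLS
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.864rhk5WtD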
00:17:40.388 [2024-07-24 19:00:17.819464] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3173025 ] 00:17:40.388 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.388 [2024-07-24 19:00:17.878303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.388 [2024-07-24 19:00:17.986715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.658 19:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:40.658 19:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:40.658 19:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.864rhk5WtD 00:17:40.922 19:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:41.179 [2024-07-24 19:00:18.593439] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:41.179 nvme0n1 00:17:41.179 19:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:41.436 Running I/O for 1 seconds... 00:17:42.368 00:17:42.368 Latency(us) 00:17:42.368 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.368 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:42.368 Verification LBA range: start 0x0 length 0x2000 00:17:42.368 nvme0n1 : 1.04 2757.33 10.77 0.00 0.00 45593.35 10679.94 78060.66 00:17:42.368 =================================================================================================================== 00:17:42.368 Total : 2757.33 10.77 0.00 0.00 45593.35 10679.94 78060.66 00:17:42.368 0 00:17:42.368 19:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 3173025 00:17:42.368 19:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3173025 ']' 00:17:42.368 19:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3173025 00:17:42.368 19:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:42.368 19:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:42.368 19:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3173025 00:17:42.368 19:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:42.368 19:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:42.368 19:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3173025' 00:17:42.368 killing process with pid 3173025 00:17:42.368 19:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3173025 00:17:42.368 Received shutdown signal, 
test time was about 1.000000 seconds 00:17:42.368 00:17:42.368 Latency(us) 00:17:42.368 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.368 =================================================================================================================== 00:17:42.368 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:42.368 19:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3173025 00:17:42.662 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 3172740 00:17:42.662 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3172740 ']' 00:17:42.662 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3172740 00:17:42.662 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:42.662 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:42.662 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3172740 00:17:42.662 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:42.662 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:42.662 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3172740' 00:17:42.662 killing process with pid 3172740 00:17:42.662 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3172740 00:17:42.662 [2024-07-24 19:00:20.205750] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:42.662 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3172740 00:17:42.926 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:17:42.926 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:42.926 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:42.926 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:42.926 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3173309 00:17:42.926 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:42.926 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3173309 00:17:42.926 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3173309 ']' 00:17:42.926 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.926 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:42.926 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
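From tls.sh@240 the target comes up without a canned -c config, and the tls.sh@241 rpc_cmd block rebuilds the state imperatively (hence the TCP-init, malloc0, and TLS-listen notices just below). rpc_cmd itself is an autotest helper; roughly, and glossing over the persistent rpc.py session the real helper maintains, it behaves like:

  rpc_cmd_sketch() {
      # Forward either the argument list or stdin lines to the target's RPC socket.
      if (( $# )); then
          scripts/rpc.py -s /var/tmp/spdk.sock "$@"
      else
          while read -r line; do scripts/rpc.py -s /var/tmp/spdk.sock $line; done
      fi
  }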
00:17:42.926 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:42.926 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.186 [2024-07-24 19:00:20.563617] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:17:43.186 [2024-07-24 19:00:20.563708] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.186 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.186 [2024-07-24 19:00:20.631526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.186 [2024-07-24 19:00:20.751055] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.186 [2024-07-24 19:00:20.751137] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.186 [2024-07-24 19:00:20.751155] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:43.186 [2024-07-24 19:00:20.751170] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:43.186 [2024-07-24 19:00:20.751182] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:43.186 [2024-07-24 19:00:20.751212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.444 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:43.444 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:43.444 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:43.444 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:43.444 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.444 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.444 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:17:43.444 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.444 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.444 [2024-07-24 19:00:20.903835] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:43.444 malloc0 00:17:43.444 [2024-07-24 19:00:20.937029] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:43.444 [2024-07-24 19:00:20.947334] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.444 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.444 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=3173453 00:17:43.444 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:43.444 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 3173453 /var/tmp/bdevperf.sock 00:17:43.444 19:00:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3173453 ']' 00:17:43.444 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:43.444 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:43.444 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:43.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:43.444 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:43.444 19:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.444 [2024-07-24 19:00:21.014260] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:17:43.444 [2024-07-24 19:00:21.014338] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3173453 ] 00:17:43.444 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.702 [2024-07-24 19:00:21.075578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.702 [2024-07-24 19:00:21.192637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.959 19:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:43.959 19:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:43.959 19:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.864rhk5WtD 00:17:44.216 19:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:44.474 [2024-07-24 19:00:21.849334] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:44.474 nvme0n1 00:17:44.474 19:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:44.474 Running I/O for 1 seconds... 
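The keyring flow traced at tls.sh@257-258 above supersedes the deprecated PSK path used in the earlier cases: the key file is registered once under a name, and the attach then references the name, so the controller options no longer carry a raw path. From the trace:

  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.864rhk5WtD
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1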
00:17:45.849 00:17:45.849 Latency(us) 00:17:45.849 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.849 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:45.849 Verification LBA range: start 0x0 length 0x2000 00:17:45.849 nvme0n1 : 1.05 2693.80 10.52 0.00 0.00 46542.86 10922.67 73011.96 00:17:45.849 =================================================================================================================== 00:17:45.849 Total : 2693.80 10.52 0.00 0.00 46542.86 10922.67 73011.96 00:17:45.849 0 00:17:45.849 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:17:45.849 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.849 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:45.849 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.849 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:17:45.849 "subsystems": [ 00:17:45.849 { 00:17:45.849 "subsystem": "keyring", 00:17:45.849 "config": [ 00:17:45.849 { 00:17:45.849 "method": "keyring_file_add_key", 00:17:45.849 "params": { 00:17:45.849 "name": "key0", 00:17:45.849 "path": "/tmp/tmp.864rhk5WtD" 00:17:45.849 } 00:17:45.849 } 00:17:45.849 ] 00:17:45.849 }, 00:17:45.849 { 00:17:45.849 "subsystem": "iobuf", 00:17:45.849 "config": [ 00:17:45.849 { 00:17:45.849 "method": "iobuf_set_options", 00:17:45.849 "params": { 00:17:45.849 "small_pool_count": 8192, 00:17:45.849 "large_pool_count": 1024, 00:17:45.849 "small_bufsize": 8192, 00:17:45.849 "large_bufsize": 135168 00:17:45.849 } 00:17:45.849 } 00:17:45.849 ] 00:17:45.849 }, 00:17:45.849 { 00:17:45.849 "subsystem": "sock", 00:17:45.849 "config": [ 00:17:45.849 { 00:17:45.849 "method": "sock_set_default_impl", 00:17:45.849 "params": { 00:17:45.849 "impl_name": "posix" 00:17:45.849 } 00:17:45.849 }, 00:17:45.849 { 00:17:45.849 "method": "sock_impl_set_options", 00:17:45.849 "params": { 00:17:45.849 "impl_name": "ssl", 00:17:45.849 "recv_buf_size": 4096, 00:17:45.849 "send_buf_size": 4096, 00:17:45.849 "enable_recv_pipe": true, 00:17:45.849 "enable_quickack": false, 00:17:45.849 "enable_placement_id": 0, 00:17:45.849 "enable_zerocopy_send_server": true, 00:17:45.849 "enable_zerocopy_send_client": false, 00:17:45.849 "zerocopy_threshold": 0, 00:17:45.849 "tls_version": 0, 00:17:45.850 "enable_ktls": false 00:17:45.850 } 00:17:45.850 }, 00:17:45.850 { 00:17:45.850 "method": "sock_impl_set_options", 00:17:45.850 "params": { 00:17:45.850 "impl_name": "posix", 00:17:45.850 "recv_buf_size": 2097152, 00:17:45.850 "send_buf_size": 2097152, 00:17:45.850 "enable_recv_pipe": true, 00:17:45.850 "enable_quickack": false, 00:17:45.850 "enable_placement_id": 0, 00:17:45.850 "enable_zerocopy_send_server": true, 00:17:45.850 "enable_zerocopy_send_client": false, 00:17:45.850 "zerocopy_threshold": 0, 00:17:45.850 "tls_version": 0, 00:17:45.850 "enable_ktls": false 00:17:45.850 } 00:17:45.850 } 00:17:45.850 ] 00:17:45.850 }, 00:17:45.850 { 00:17:45.850 "subsystem": "vmd", 00:17:45.850 "config": [] 00:17:45.850 }, 00:17:45.850 { 00:17:45.850 "subsystem": "accel", 00:17:45.850 "config": [ 00:17:45.850 { 00:17:45.850 "method": "accel_set_options", 00:17:45.850 "params": { 00:17:45.850 "small_cache_size": 128, 00:17:45.850 "large_cache_size": 16, 00:17:45.850 "task_count": 2048, 00:17:45.850 "sequence_count": 2048, 00:17:45.850 
"buf_count": 2048 00:17:45.850 } 00:17:45.850 } 00:17:45.850 ] 00:17:45.850 }, 00:17:45.850 { 00:17:45.850 "subsystem": "bdev", 00:17:45.850 "config": [ 00:17:45.850 { 00:17:45.850 "method": "bdev_set_options", 00:17:45.850 "params": { 00:17:45.850 "bdev_io_pool_size": 65535, 00:17:45.850 "bdev_io_cache_size": 256, 00:17:45.850 "bdev_auto_examine": true, 00:17:45.850 "iobuf_small_cache_size": 128, 00:17:45.850 "iobuf_large_cache_size": 16 00:17:45.850 } 00:17:45.850 }, 00:17:45.850 { 00:17:45.850 "method": "bdev_raid_set_options", 00:17:45.850 "params": { 00:17:45.850 "process_window_size_kb": 1024, 00:17:45.850 "process_max_bandwidth_mb_sec": 0 00:17:45.850 } 00:17:45.850 }, 00:17:45.850 { 00:17:45.850 "method": "bdev_iscsi_set_options", 00:17:45.850 "params": { 00:17:45.850 "timeout_sec": 30 00:17:45.850 } 00:17:45.850 }, 00:17:45.850 { 00:17:45.850 "method": "bdev_nvme_set_options", 00:17:45.850 "params": { 00:17:45.850 "action_on_timeout": "none", 00:17:45.850 "timeout_us": 0, 00:17:45.850 "timeout_admin_us": 0, 00:17:45.850 "keep_alive_timeout_ms": 10000, 00:17:45.850 "arbitration_burst": 0, 00:17:45.850 "low_priority_weight": 0, 00:17:45.850 "medium_priority_weight": 0, 00:17:45.850 "high_priority_weight": 0, 00:17:45.850 "nvme_adminq_poll_period_us": 10000, 00:17:45.850 "nvme_ioq_poll_period_us": 0, 00:17:45.850 "io_queue_requests": 0, 00:17:45.850 "delay_cmd_submit": true, 00:17:45.850 "transport_retry_count": 4, 00:17:45.850 "bdev_retry_count": 3, 00:17:45.850 "transport_ack_timeout": 0, 00:17:45.850 "ctrlr_loss_timeout_sec": 0, 00:17:45.850 "reconnect_delay_sec": 0, 00:17:45.850 "fast_io_fail_timeout_sec": 0, 00:17:45.850 "disable_auto_failback": false, 00:17:45.850 "generate_uuids": false, 00:17:45.850 "transport_tos": 0, 00:17:45.850 "nvme_error_stat": false, 00:17:45.850 "rdma_srq_size": 0, 00:17:45.850 "io_path_stat": false, 00:17:45.850 "allow_accel_sequence": false, 00:17:45.850 "rdma_max_cq_size": 0, 00:17:45.850 "rdma_cm_event_timeout_ms": 0, 00:17:45.850 "dhchap_digests": [ 00:17:45.850 "sha256", 00:17:45.850 "sha384", 00:17:45.850 "sha512" 00:17:45.850 ], 00:17:45.850 "dhchap_dhgroups": [ 00:17:45.850 "null", 00:17:45.850 "ffdhe2048", 00:17:45.850 "ffdhe3072", 00:17:45.850 "ffdhe4096", 00:17:45.850 "ffdhe6144", 00:17:45.850 "ffdhe8192" 00:17:45.850 ] 00:17:45.850 } 00:17:45.850 }, 00:17:45.850 { 00:17:45.850 "method": "bdev_nvme_set_hotplug", 00:17:45.850 "params": { 00:17:45.850 "period_us": 100000, 00:17:45.850 "enable": false 00:17:45.850 } 00:17:45.850 }, 00:17:45.850 { 00:17:45.850 "method": "bdev_malloc_create", 00:17:45.850 "params": { 00:17:45.850 "name": "malloc0", 00:17:45.850 "num_blocks": 8192, 00:17:45.850 "block_size": 4096, 00:17:45.850 "physical_block_size": 4096, 00:17:45.850 "uuid": "a602c412-4ffc-4154-ac28-7fcabccc1954", 00:17:45.850 "optimal_io_boundary": 0, 00:17:45.850 "md_size": 0, 00:17:45.850 "dif_type": 0, 00:17:45.850 "dif_is_head_of_md": false, 00:17:45.850 "dif_pi_format": 0 00:17:45.850 } 00:17:45.850 }, 00:17:45.850 { 00:17:45.850 "method": "bdev_wait_for_examine" 00:17:45.850 } 00:17:45.850 ] 00:17:45.850 }, 00:17:45.850 { 00:17:45.850 "subsystem": "nbd", 00:17:45.850 "config": [] 00:17:45.850 }, 00:17:45.850 { 00:17:45.850 "subsystem": "scheduler", 00:17:45.850 "config": [ 00:17:45.850 { 00:17:45.850 "method": "framework_set_scheduler", 00:17:45.850 "params": { 00:17:45.850 "name": "static" 00:17:45.850 } 00:17:45.850 } 00:17:45.850 ] 00:17:45.850 }, 00:17:45.850 { 00:17:45.850 "subsystem": "nvmf", 00:17:45.850 "config": [ 00:17:45.850 { 
00:17:45.850 "method": "nvmf_set_config", 00:17:45.850 "params": { 00:17:45.850 "discovery_filter": "match_any", 00:17:45.850 "admin_cmd_passthru": { 00:17:45.850 "identify_ctrlr": false 00:17:45.850 } 00:17:45.850 } 00:17:45.850 }, 00:17:45.850 { 00:17:45.850 "method": "nvmf_set_max_subsystems", 00:17:45.850 "params": { 00:17:45.850 "max_subsystems": 1024 00:17:45.850 } 00:17:45.850 }, 00:17:45.850 { 00:17:45.850 "method": "nvmf_set_crdt", 00:17:45.850 "params": { 00:17:45.850 "crdt1": 0, 00:17:45.850 "crdt2": 0, 00:17:45.850 "crdt3": 0 00:17:45.850 } 00:17:45.850 }, 00:17:45.850 { 00:17:45.850 "method": "nvmf_create_transport", 00:17:45.850 "params": { 00:17:45.850 "trtype": "TCP", 00:17:45.850 "max_queue_depth": 128, 00:17:45.850 "max_io_qpairs_per_ctrlr": 127, 00:17:45.850 "in_capsule_data_size": 4096, 00:17:45.850 "max_io_size": 131072, 00:17:45.850 "io_unit_size": 131072, 00:17:45.850 "max_aq_depth": 128, 00:17:45.850 "num_shared_buffers": 511, 00:17:45.850 "buf_cache_size": 4294967295, 00:17:45.850 "dif_insert_or_strip": false, 00:17:45.850 "zcopy": false, 00:17:45.850 "c2h_success": false, 00:17:45.850 "sock_priority": 0, 00:17:45.850 "abort_timeout_sec": 1, 00:17:45.850 "ack_timeout": 0, 00:17:45.850 "data_wr_pool_size": 0 00:17:45.850 } 00:17:45.850 }, 00:17:45.850 { 00:17:45.850 "method": "nvmf_create_subsystem", 00:17:45.850 "params": { 00:17:45.850 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:45.850 "allow_any_host": false, 00:17:45.850 "serial_number": "00000000000000000000", 00:17:45.850 "model_number": "SPDK bdev Controller", 00:17:45.850 "max_namespaces": 32, 00:17:45.850 "min_cntlid": 1, 00:17:45.850 "max_cntlid": 65519, 00:17:45.850 "ana_reporting": false 00:17:45.850 } 00:17:45.850 }, 00:17:45.850 { 00:17:45.850 "method": "nvmf_subsystem_add_host", 00:17:45.850 "params": { 00:17:45.850 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:45.850 "host": "nqn.2016-06.io.spdk:host1", 00:17:45.850 "psk": "key0" 00:17:45.850 } 00:17:45.850 }, 00:17:45.850 { 00:17:45.850 "method": "nvmf_subsystem_add_ns", 00:17:45.850 "params": { 00:17:45.850 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:45.850 "namespace": { 00:17:45.850 "nsid": 1, 00:17:45.850 "bdev_name": "malloc0", 00:17:45.850 "nguid": "A602C4124FFC4154AC287FCABCCC1954", 00:17:45.850 "uuid": "a602c412-4ffc-4154-ac28-7fcabccc1954", 00:17:45.850 "no_auto_visible": false 00:17:45.850 } 00:17:45.850 } 00:17:45.850 }, 00:17:45.850 { 00:17:45.850 "method": "nvmf_subsystem_add_listener", 00:17:45.850 "params": { 00:17:45.850 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:45.850 "listen_address": { 00:17:45.850 "trtype": "TCP", 00:17:45.850 "adrfam": "IPv4", 00:17:45.850 "traddr": "10.0.0.2", 00:17:45.850 "trsvcid": "4420" 00:17:45.850 }, 00:17:45.850 "secure_channel": false, 00:17:45.850 "sock_impl": "ssl" 00:17:45.850 } 00:17:45.850 } 00:17:45.850 ] 00:17:45.850 } 00:17:45.850 ] 00:17:45.850 }' 00:17:45.850 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:46.108 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:17:46.108 "subsystems": [ 00:17:46.108 { 00:17:46.108 "subsystem": "keyring", 00:17:46.108 "config": [ 00:17:46.108 { 00:17:46.108 "method": "keyring_file_add_key", 00:17:46.108 "params": { 00:17:46.108 "name": "key0", 00:17:46.108 "path": "/tmp/tmp.864rhk5WtD" 00:17:46.108 } 00:17:46.108 } 00:17:46.108 ] 00:17:46.108 }, 00:17:46.108 { 00:17:46.108 "subsystem": "iobuf", 
00:17:46.108 "config": [ 00:17:46.108 { 00:17:46.108 "method": "iobuf_set_options", 00:17:46.108 "params": { 00:17:46.108 "small_pool_count": 8192, 00:17:46.108 "large_pool_count": 1024, 00:17:46.108 "small_bufsize": 8192, 00:17:46.108 "large_bufsize": 135168 00:17:46.108 } 00:17:46.108 } 00:17:46.108 ] 00:17:46.108 }, 00:17:46.108 { 00:17:46.108 "subsystem": "sock", 00:17:46.108 "config": [ 00:17:46.108 { 00:17:46.108 "method": "sock_set_default_impl", 00:17:46.108 "params": { 00:17:46.108 "impl_name": "posix" 00:17:46.108 } 00:17:46.108 }, 00:17:46.108 { 00:17:46.108 "method": "sock_impl_set_options", 00:17:46.108 "params": { 00:17:46.108 "impl_name": "ssl", 00:17:46.108 "recv_buf_size": 4096, 00:17:46.108 "send_buf_size": 4096, 00:17:46.108 "enable_recv_pipe": true, 00:17:46.108 "enable_quickack": false, 00:17:46.108 "enable_placement_id": 0, 00:17:46.108 "enable_zerocopy_send_server": true, 00:17:46.108 "enable_zerocopy_send_client": false, 00:17:46.108 "zerocopy_threshold": 0, 00:17:46.108 "tls_version": 0, 00:17:46.108 "enable_ktls": false 00:17:46.108 } 00:17:46.108 }, 00:17:46.108 { 00:17:46.108 "method": "sock_impl_set_options", 00:17:46.108 "params": { 00:17:46.108 "impl_name": "posix", 00:17:46.108 "recv_buf_size": 2097152, 00:17:46.108 "send_buf_size": 2097152, 00:17:46.108 "enable_recv_pipe": true, 00:17:46.108 "enable_quickack": false, 00:17:46.108 "enable_placement_id": 0, 00:17:46.108 "enable_zerocopy_send_server": true, 00:17:46.108 "enable_zerocopy_send_client": false, 00:17:46.108 "zerocopy_threshold": 0, 00:17:46.108 "tls_version": 0, 00:17:46.108 "enable_ktls": false 00:17:46.108 } 00:17:46.108 } 00:17:46.108 ] 00:17:46.108 }, 00:17:46.108 { 00:17:46.108 "subsystem": "vmd", 00:17:46.108 "config": [] 00:17:46.108 }, 00:17:46.108 { 00:17:46.108 "subsystem": "accel", 00:17:46.108 "config": [ 00:17:46.108 { 00:17:46.108 "method": "accel_set_options", 00:17:46.108 "params": { 00:17:46.108 "small_cache_size": 128, 00:17:46.108 "large_cache_size": 16, 00:17:46.108 "task_count": 2048, 00:17:46.108 "sequence_count": 2048, 00:17:46.108 "buf_count": 2048 00:17:46.108 } 00:17:46.108 } 00:17:46.108 ] 00:17:46.108 }, 00:17:46.108 { 00:17:46.108 "subsystem": "bdev", 00:17:46.108 "config": [ 00:17:46.108 { 00:17:46.108 "method": "bdev_set_options", 00:17:46.108 "params": { 00:17:46.108 "bdev_io_pool_size": 65535, 00:17:46.108 "bdev_io_cache_size": 256, 00:17:46.108 "bdev_auto_examine": true, 00:17:46.108 "iobuf_small_cache_size": 128, 00:17:46.108 "iobuf_large_cache_size": 16 00:17:46.108 } 00:17:46.108 }, 00:17:46.108 { 00:17:46.108 "method": "bdev_raid_set_options", 00:17:46.108 "params": { 00:17:46.108 "process_window_size_kb": 1024, 00:17:46.108 "process_max_bandwidth_mb_sec": 0 00:17:46.108 } 00:17:46.108 }, 00:17:46.108 { 00:17:46.108 "method": "bdev_iscsi_set_options", 00:17:46.108 "params": { 00:17:46.108 "timeout_sec": 30 00:17:46.108 } 00:17:46.108 }, 00:17:46.108 { 00:17:46.108 "method": "bdev_nvme_set_options", 00:17:46.108 "params": { 00:17:46.108 "action_on_timeout": "none", 00:17:46.108 "timeout_us": 0, 00:17:46.108 "timeout_admin_us": 0, 00:17:46.108 "keep_alive_timeout_ms": 10000, 00:17:46.108 "arbitration_burst": 0, 00:17:46.108 "low_priority_weight": 0, 00:17:46.108 "medium_priority_weight": 0, 00:17:46.108 "high_priority_weight": 0, 00:17:46.108 "nvme_adminq_poll_period_us": 10000, 00:17:46.108 "nvme_ioq_poll_period_us": 0, 00:17:46.108 "io_queue_requests": 512, 00:17:46.108 "delay_cmd_submit": true, 00:17:46.108 "transport_retry_count": 4, 00:17:46.108 
"bdev_retry_count": 3, 00:17:46.108 "transport_ack_timeout": 0, 00:17:46.108 "ctrlr_loss_timeout_sec": 0, 00:17:46.108 "reconnect_delay_sec": 0, 00:17:46.108 "fast_io_fail_timeout_sec": 0, 00:17:46.108 "disable_auto_failback": false, 00:17:46.108 "generate_uuids": false, 00:17:46.108 "transport_tos": 0, 00:17:46.108 "nvme_error_stat": false, 00:17:46.108 "rdma_srq_size": 0, 00:17:46.108 "io_path_stat": false, 00:17:46.108 "allow_accel_sequence": false, 00:17:46.108 "rdma_max_cq_size": 0, 00:17:46.108 "rdma_cm_event_timeout_ms": 0, 00:17:46.108 "dhchap_digests": [ 00:17:46.108 "sha256", 00:17:46.108 "sha384", 00:17:46.108 "sha512" 00:17:46.108 ], 00:17:46.108 "dhchap_dhgroups": [ 00:17:46.108 "null", 00:17:46.108 "ffdhe2048", 00:17:46.108 "ffdhe3072", 00:17:46.108 "ffdhe4096", 00:17:46.108 "ffdhe6144", 00:17:46.108 "ffdhe8192" 00:17:46.108 ] 00:17:46.108 } 00:17:46.108 }, 00:17:46.108 { 00:17:46.108 "method": "bdev_nvme_attach_controller", 00:17:46.108 "params": { 00:17:46.108 "name": "nvme0", 00:17:46.108 "trtype": "TCP", 00:17:46.108 "adrfam": "IPv4", 00:17:46.108 "traddr": "10.0.0.2", 00:17:46.108 "trsvcid": "4420", 00:17:46.108 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:46.108 "prchk_reftag": false, 00:17:46.108 "prchk_guard": false, 00:17:46.108 "ctrlr_loss_timeout_sec": 0, 00:17:46.108 "reconnect_delay_sec": 0, 00:17:46.108 "fast_io_fail_timeout_sec": 0, 00:17:46.108 "psk": "key0", 00:17:46.108 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:46.108 "hdgst": false, 00:17:46.108 "ddgst": false 00:17:46.108 } 00:17:46.108 }, 00:17:46.108 { 00:17:46.108 "method": "bdev_nvme_set_hotplug", 00:17:46.108 "params": { 00:17:46.108 "period_us": 100000, 00:17:46.108 "enable": false 00:17:46.108 } 00:17:46.108 }, 00:17:46.108 { 00:17:46.108 "method": "bdev_enable_histogram", 00:17:46.108 "params": { 00:17:46.108 "name": "nvme0n1", 00:17:46.108 "enable": true 00:17:46.108 } 00:17:46.108 }, 00:17:46.108 { 00:17:46.108 "method": "bdev_wait_for_examine" 00:17:46.108 } 00:17:46.108 ] 00:17:46.108 }, 00:17:46.108 { 00:17:46.108 "subsystem": "nbd", 00:17:46.108 "config": [] 00:17:46.108 } 00:17:46.108 ] 00:17:46.108 }' 00:17:46.109 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 3173453 00:17:46.109 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3173453 ']' 00:17:46.109 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3173453 00:17:46.109 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:46.109 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:46.109 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3173453 00:17:46.109 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:46.109 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:46.109 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3173453' 00:17:46.109 killing process with pid 3173453 00:17:46.109 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3173453 00:17:46.109 Received shutdown signal, test time was about 1.000000 seconds 00:17:46.109 00:17:46.109 Latency(us) 00:17:46.109 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:46.109 =================================================================================================================== 00:17:46.109 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:46.109 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3173453 00:17:46.366 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 3173309 00:17:46.366 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3173309 ']' 00:17:46.366 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3173309 00:17:46.366 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:46.366 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:46.366 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3173309 00:17:46.366 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:46.366 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:46.366 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3173309' 00:17:46.366 killing process with pid 3173309 00:17:46.366 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3173309 00:17:46.366 19:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3173309 00:17:46.624 19:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:17:46.624 19:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:46.624 19:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:17:46.624 "subsystems": [ 00:17:46.624 { 00:17:46.624 "subsystem": "keyring", 00:17:46.624 "config": [ 00:17:46.624 { 00:17:46.624 "method": "keyring_file_add_key", 00:17:46.624 "params": { 00:17:46.624 "name": "key0", 00:17:46.624 "path": "/tmp/tmp.864rhk5WtD" 00:17:46.624 } 00:17:46.624 } 00:17:46.624 ] 00:17:46.624 }, 00:17:46.624 { 00:17:46.624 "subsystem": "iobuf", 00:17:46.624 "config": [ 00:17:46.624 { 00:17:46.624 "method": "iobuf_set_options", 00:17:46.624 "params": { 00:17:46.624 "small_pool_count": 8192, 00:17:46.624 "large_pool_count": 1024, 00:17:46.624 "small_bufsize": 8192, 00:17:46.624 "large_bufsize": 135168 00:17:46.624 } 00:17:46.624 } 00:17:46.624 ] 00:17:46.624 }, 00:17:46.624 { 00:17:46.624 "subsystem": "sock", 00:17:46.624 "config": [ 00:17:46.624 { 00:17:46.624 "method": "sock_set_default_impl", 00:17:46.624 "params": { 00:17:46.624 "impl_name": "posix" 00:17:46.624 } 00:17:46.624 }, 00:17:46.624 { 00:17:46.624 "method": "sock_impl_set_options", 00:17:46.624 "params": { 00:17:46.624 "impl_name": "ssl", 00:17:46.624 "recv_buf_size": 4096, 00:17:46.624 "send_buf_size": 4096, 00:17:46.624 "enable_recv_pipe": true, 00:17:46.624 "enable_quickack": false, 00:17:46.624 "enable_placement_id": 0, 00:17:46.624 "enable_zerocopy_send_server": true, 00:17:46.624 "enable_zerocopy_send_client": false, 00:17:46.624 "zerocopy_threshold": 0, 00:17:46.624 "tls_version": 0, 00:17:46.624 "enable_ktls": false 00:17:46.624 } 00:17:46.624 }, 00:17:46.624 { 00:17:46.624 "method": "sock_impl_set_options", 00:17:46.624 "params": { 00:17:46.624 "impl_name": "posix", 00:17:46.624 "recv_buf_size": 
2097152, 00:17:46.624 "send_buf_size": 2097152, 00:17:46.624 "enable_recv_pipe": true, 00:17:46.624 "enable_quickack": false, 00:17:46.624 "enable_placement_id": 0, 00:17:46.624 "enable_zerocopy_send_server": true, 00:17:46.624 "enable_zerocopy_send_client": false, 00:17:46.624 "zerocopy_threshold": 0, 00:17:46.624 "tls_version": 0, 00:17:46.624 "enable_ktls": false 00:17:46.624 } 00:17:46.624 } 00:17:46.624 ] 00:17:46.624 }, 00:17:46.624 { 00:17:46.624 "subsystem": "vmd", 00:17:46.624 "config": [] 00:17:46.624 }, 00:17:46.624 { 00:17:46.624 "subsystem": "accel", 00:17:46.624 "config": [ 00:17:46.624 { 00:17:46.624 "method": "accel_set_options", 00:17:46.624 "params": { 00:17:46.624 "small_cache_size": 128, 00:17:46.624 "large_cache_size": 16, 00:17:46.624 "task_count": 2048, 00:17:46.624 "sequence_count": 2048, 00:17:46.624 "buf_count": 2048 00:17:46.624 } 00:17:46.624 } 00:17:46.624 ] 00:17:46.624 }, 00:17:46.624 { 00:17:46.624 "subsystem": "bdev", 00:17:46.624 "config": [ 00:17:46.624 { 00:17:46.624 "method": "bdev_set_options", 00:17:46.624 "params": { 00:17:46.624 "bdev_io_pool_size": 65535, 00:17:46.624 "bdev_io_cache_size": 256, 00:17:46.624 "bdev_auto_examine": true, 00:17:46.624 "iobuf_small_cache_size": 128, 00:17:46.624 "iobuf_large_cache_size": 16 00:17:46.624 } 00:17:46.624 }, 00:17:46.624 { 00:17:46.624 "method": "bdev_raid_set_options", 00:17:46.624 "params": { 00:17:46.624 "process_window_size_kb": 1024, 00:17:46.624 "process_max_bandwidth_mb_sec": 0 00:17:46.624 } 00:17:46.624 }, 00:17:46.624 { 00:17:46.624 "method": "bdev_iscsi_set_options", 00:17:46.624 "params": { 00:17:46.624 "timeout_sec": 30 00:17:46.624 } 00:17:46.624 }, 00:17:46.624 { 00:17:46.624 "method": "bdev_nvme_set_options", 00:17:46.624 "params": { 00:17:46.624 "action_on_timeout": "none", 00:17:46.624 "timeout_us": 0, 00:17:46.624 "timeout_admin_us": 0, 00:17:46.624 "keep_alive_timeout_ms": 10000, 00:17:46.624 "arbitration_burst": 0, 00:17:46.624 "low_priority_weight": 0, 00:17:46.624 "medium_priority_weight": 0, 00:17:46.624 "high_priority_weight": 0, 00:17:46.624 "nvme_adminq_poll_period_us": 10000, 00:17:46.624 "nvme_ioq_poll_period_us": 0, 00:17:46.624 "io_queue_requests": 0, 00:17:46.624 "delay_cmd_submit": true, 00:17:46.624 "transport_retry_count": 4, 00:17:46.624 "bdev_retry_count": 3, 00:17:46.624 "transport_ack_timeout": 0, 00:17:46.624 "ctrlr_loss_timeout_sec": 0, 00:17:46.624 "reconnect_delay_sec": 0, 00:17:46.624 "fast_io_fail_timeout_sec": 0, 00:17:46.624 "disable_auto_failback": false, 00:17:46.624 "generate_uuids": false, 00:17:46.624 "transport_tos": 0, 00:17:46.624 "nvme_error_stat": false, 00:17:46.624 "rdma_srq_size": 0, 00:17:46.624 "io_path_stat": false, 00:17:46.624 "allow_accel_sequence": false, 00:17:46.624 "rdma_max_cq_size": 0, 00:17:46.624 "rdma_cm_event_timeout_ms": 0, 00:17:46.624 "dhchap_digests": [ 00:17:46.624 "sha256", 00:17:46.624 "sha384", 00:17:46.624 "sha512" 00:17:46.624 ], 00:17:46.624 "dhchap_dhgroups": [ 00:17:46.624 "null", 00:17:46.624 "ffdhe2048", 00:17:46.624 "ffdhe3072", 00:17:46.624 "ffdhe4096", 00:17:46.624 "ffdhe6144", 00:17:46.624 "ffdhe8192" 00:17:46.624 ] 00:17:46.624 } 00:17:46.624 }, 00:17:46.624 { 00:17:46.624 "method": "bdev_nvme_set_hotplug", 00:17:46.624 "params": { 00:17:46.624 "period_us": 100000, 00:17:46.624 "enable": false 00:17:46.624 } 00:17:46.624 }, 00:17:46.624 { 00:17:46.624 "method": "bdev_malloc_create", 00:17:46.624 "params": { 00:17:46.624 "name": "malloc0", 00:17:46.624 "num_blocks": 8192, 00:17:46.624 "block_size": 4096, 
00:17:46.624 "physical_block_size": 4096, 00:17:46.624 "uuid": "a602c412-4ffc-4154-ac28-7fcabccc1954", 00:17:46.624 "optimal_io_boundary": 0, 00:17:46.624 "md_size": 0, 00:17:46.624 "dif_type": 0, 00:17:46.624 "dif_is_head_of_md": false, 00:17:46.625 "dif_pi_format": 0 00:17:46.625 } 00:17:46.625 }, 00:17:46.625 { 00:17:46.625 "method": "bdev_wait_for_examine" 00:17:46.625 } 00:17:46.625 ] 00:17:46.625 }, 00:17:46.625 { 00:17:46.625 "subsystem": "nbd", 00:17:46.625 "config": [] 00:17:46.625 }, 00:17:46.625 { 00:17:46.625 "subsystem": "scheduler", 00:17:46.625 "config": [ 00:17:46.625 { 00:17:46.625 "method": "framework_set_scheduler", 00:17:46.625 "params": { 00:17:46.625 "name": "static" 00:17:46.625 } 00:17:46.625 } 00:17:46.625 ] 00:17:46.625 }, 00:17:46.625 { 00:17:46.625 "subsystem": "nvmf", 00:17:46.625 "config": [ 00:17:46.625 { 00:17:46.625 "method": "nvmf_set_config", 00:17:46.625 "params": { 00:17:46.625 "discovery_filter": "match_any", 00:17:46.625 "admin_cmd_passthru": { 00:17:46.625 "identify_ctrlr": false 00:17:46.625 } 00:17:46.625 } 00:17:46.625 }, 00:17:46.625 { 00:17:46.625 "method": "nvmf_set_max_subsystems", 00:17:46.625 "params": { 00:17:46.625 "max_subsystems": 1024 00:17:46.625 } 00:17:46.625 }, 00:17:46.625 { 00:17:46.625 "method": "nvmf_set_crdt", 00:17:46.625 "params": { 00:17:46.625 "crdt1": 0, 00:17:46.625 "crdt2": 0, 00:17:46.625 "crdt3": 0 00:17:46.625 } 00:17:46.625 }, 00:17:46.625 { 00:17:46.625 "method": "nvmf_create_transport", 00:17:46.625 "params": { 00:17:46.625 "trtype": "TCP", 00:17:46.625 "max_queue_depth": 128, 00:17:46.625 "max_io_qpairs_per_ctrlr": 127, 00:17:46.625 "in_capsule_data_size": 4096, 00:17:46.625 "max_io_size": 131072, 00:17:46.625 "io_unit_size": 131072, 00:17:46.625 "max_aq_depth": 128, 00:17:46.625 "num_shared_buffers": 511, 00:17:46.625 "buf_cache_size": 4294967295, 00:17:46.625 "dif_insert_or_strip": false, 00:17:46.625 "zcopy": false, 00:17:46.625 "c2h_success": false, 00:17:46.625 "sock_priority": 0, 00:17:46.625 "abort_timeout_sec": 1, 00:17:46.625 "ack_timeout": 0, 00:17:46.625 "data_wr_pool_size": 0 00:17:46.625 } 00:17:46.625 }, 00:17:46.625 { 00:17:46.625 "method": "nvmf_create_subsystem", 00:17:46.625 "params": { 00:17:46.625 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:46.625 "allow_any_host": false, 00:17:46.625 "serial_number": "00000000000000000000", 00:17:46.625 "model_number": "SPDK bdev Controller", 00:17:46.625 "max_namespaces": 32, 00:17:46.625 "min_cntlid": 1, 00:17:46.625 "max_cntlid": 65519, 00:17:46.625 "ana_reporting": false 00:17:46.625 } 00:17:46.625 }, 00:17:46.625 { 00:17:46.625 "method": "nvmf_subsystem_add_host", 00:17:46.625 "params": { 00:17:46.625 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:46.625 "host": "nqn.2016-06.io.spdk:host1", 00:17:46.625 "psk": "key0" 00:17:46.625 } 00:17:46.625 }, 00:17:46.625 { 00:17:46.625 "method": "nvmf_subsystem_add_ns", 00:17:46.625 "params": { 00:17:46.625 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:46.625 "namespace": { 00:17:46.625 "nsid": 1, 00:17:46.625 "bdev_name": "malloc0", 00:17:46.625 "nguid": "A602C4124FFC4154AC287FCABCCC1954", 00:17:46.625 "uuid": "a602c412-4ffc-4154-ac28-7fcabccc1954", 00:17:46.625 "no_auto_visible": false 00:17:46.625 } 00:17:46.625 } 00:17:46.625 }, 00:17:46.625 { 00:17:46.625 "method": "nvmf_subsystem_add_listener", 00:17:46.625 "params": { 00:17:46.625 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:46.625 "listen_address": { 00:17:46.625 "trtype": "TCP", 00:17:46.625 "adrfam": "IPv4", 00:17:46.625 "traddr": "10.0.0.2", 00:17:46.625 "trsvcid": 
"4420" 00:17:46.625 }, 00:17:46.625 "secure_channel": false, 00:17:46.625 "sock_impl": "ssl" 00:17:46.625 } 00:17:46.625 } 00:17:46.625 ] 00:17:46.625 } 00:17:46.625 ] 00:17:46.625 }' 00:17:46.625 19:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:46.625 19:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:46.625 19:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3173778 00:17:46.625 19:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:17:46.625 19:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3173778 00:17:46.625 19:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3173778 ']' 00:17:46.625 19:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.625 19:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:46.625 19:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.625 19:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:46.625 19:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:46.625 [2024-07-24 19:00:24.210583] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:17:46.625 [2024-07-24 19:00:24.210669] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.883 EAL: No free 2048 kB hugepages reported on node 1 00:17:46.884 [2024-07-24 19:00:24.280374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.884 [2024-07-24 19:00:24.396014] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:46.884 [2024-07-24 19:00:24.396081] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:46.884 [2024-07-24 19:00:24.396098] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:46.884 [2024-07-24 19:00:24.396120] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:46.884 [2024-07-24 19:00:24.396132] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:46.884 [2024-07-24 19:00:24.396227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.141 [2024-07-24 19:00:24.646970] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:47.141 [2024-07-24 19:00:24.689622] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:47.141 [2024-07-24 19:00:24.689856] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.711 19:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:47.712 19:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:47.712 19:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:47.712 19:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:47.712 19:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:47.712 19:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.712 19:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=3173897 00:17:47.712 19:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 3173897 /var/tmp/bdevperf.sock 00:17:47.712 19:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3173897 ']' 00:17:47.712 19:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:47.712 19:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:47.712 19:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:17:47.712 19:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:47.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
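bdevperf is launched with -z, so after initialization it idles and waits to be driven over this UNIX-domain RPC socket; nothing happens until the script talks to it. A minimal sketch of the follow-up steps the script performs once the socket answers, using only commands that appear later in this run:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock

    # Confirm the TLS-backed NVMe-oF controller attached under the expected name
    $RPC -s $SOCK bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0

    # Start the workload configured on the command line (-q 128 -o 4k -w verify -t 1)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s $SOCK perform_tests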
00:17:47.712 19:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:17:47.712 "subsystems": [ 00:17:47.712 { 00:17:47.712 "subsystem": "keyring", 00:17:47.712 "config": [ 00:17:47.712 { 00:17:47.712 "method": "keyring_file_add_key", 00:17:47.712 "params": { 00:17:47.712 "name": "key0", 00:17:47.712 "path": "/tmp/tmp.864rhk5WtD" 00:17:47.712 } 00:17:47.712 } 00:17:47.712 ] 00:17:47.712 }, 00:17:47.712 { 00:17:47.712 "subsystem": "iobuf", 00:17:47.712 "config": [ 00:17:47.712 { 00:17:47.712 "method": "iobuf_set_options", 00:17:47.712 "params": { 00:17:47.712 "small_pool_count": 8192, 00:17:47.712 "large_pool_count": 1024, 00:17:47.712 "small_bufsize": 8192, 00:17:47.712 "large_bufsize": 135168 00:17:47.712 } 00:17:47.712 } 00:17:47.712 ] 00:17:47.712 }, 00:17:47.712 { 00:17:47.712 "subsystem": "sock", 00:17:47.712 "config": [ 00:17:47.712 { 00:17:47.712 "method": "sock_set_default_impl", 00:17:47.712 "params": { 00:17:47.712 "impl_name": "posix" 00:17:47.712 } 00:17:47.712 }, 00:17:47.712 { 00:17:47.712 "method": "sock_impl_set_options", 00:17:47.712 "params": { 00:17:47.712 "impl_name": "ssl", 00:17:47.712 "recv_buf_size": 4096, 00:17:47.712 "send_buf_size": 4096, 00:17:47.712 "enable_recv_pipe": true, 00:17:47.712 "enable_quickack": false, 00:17:47.712 "enable_placement_id": 0, 00:17:47.712 "enable_zerocopy_send_server": true, 00:17:47.712 "enable_zerocopy_send_client": false, 00:17:47.712 "zerocopy_threshold": 0, 00:17:47.712 "tls_version": 0, 00:17:47.712 "enable_ktls": false 00:17:47.712 } 00:17:47.712 }, 00:17:47.712 { 00:17:47.712 "method": "sock_impl_set_options", 00:17:47.712 "params": { 00:17:47.712 "impl_name": "posix", 00:17:47.712 "recv_buf_size": 2097152, 00:17:47.712 "send_buf_size": 2097152, 00:17:47.712 "enable_recv_pipe": true, 00:17:47.712 "enable_quickack": false, 00:17:47.712 "enable_placement_id": 0, 00:17:47.712 "enable_zerocopy_send_server": true, 00:17:47.712 "enable_zerocopy_send_client": false, 00:17:47.712 "zerocopy_threshold": 0, 00:17:47.712 "tls_version": 0, 00:17:47.712 "enable_ktls": false 00:17:47.712 } 00:17:47.712 } 00:17:47.712 ] 00:17:47.712 }, 00:17:47.712 { 00:17:47.712 "subsystem": "vmd", 00:17:47.712 "config": [] 00:17:47.712 }, 00:17:47.712 { 00:17:47.712 "subsystem": "accel", 00:17:47.712 "config": [ 00:17:47.712 { 00:17:47.712 "method": "accel_set_options", 00:17:47.712 "params": { 00:17:47.712 "small_cache_size": 128, 00:17:47.712 "large_cache_size": 16, 00:17:47.712 "task_count": 2048, 00:17:47.712 "sequence_count": 2048, 00:17:47.712 "buf_count": 2048 00:17:47.712 } 00:17:47.712 } 00:17:47.712 ] 00:17:47.712 }, 00:17:47.712 { 00:17:47.712 "subsystem": "bdev", 00:17:47.712 "config": [ 00:17:47.712 { 00:17:47.712 "method": "bdev_set_options", 00:17:47.712 "params": { 00:17:47.712 "bdev_io_pool_size": 65535, 00:17:47.712 "bdev_io_cache_size": 256, 00:17:47.712 "bdev_auto_examine": true, 00:17:47.712 "iobuf_small_cache_size": 128, 00:17:47.712 "iobuf_large_cache_size": 16 00:17:47.712 } 00:17:47.712 }, 00:17:47.712 { 00:17:47.712 "method": "bdev_raid_set_options", 00:17:47.712 "params": { 00:17:47.712 "process_window_size_kb": 1024, 00:17:47.712 "process_max_bandwidth_mb_sec": 0 00:17:47.712 } 00:17:47.712 }, 00:17:47.712 { 00:17:47.712 "method": "bdev_iscsi_set_options", 00:17:47.712 "params": { 00:17:47.712 "timeout_sec": 30 00:17:47.712 } 00:17:47.712 }, 00:17:47.712 { 00:17:47.712 "method": "bdev_nvme_set_options", 00:17:47.712 "params": { 00:17:47.712 "action_on_timeout": "none", 00:17:47.712 "timeout_us": 0, 
00:17:47.712 "timeout_admin_us": 0, 00:17:47.712 "keep_alive_timeout_ms": 10000, 00:17:47.712 "arbitration_burst": 0, 00:17:47.712 "low_priority_weight": 0, 00:17:47.712 "medium_priority_weight": 0, 00:17:47.712 "high_priority_weight": 0, 00:17:47.712 "nvme_adminq_poll_period_us": 10000, 00:17:47.712 "nvme_ioq_poll_period_us": 0, 00:17:47.712 "io_queue_requests": 512, 00:17:47.712 "delay_cmd_submit": true, 00:17:47.712 "transport_retry_count": 4, 00:17:47.712 "bdev_retry_count": 3, 00:17:47.712 "transport_ack_timeout": 0, 00:17:47.712 "ctrlr_loss_timeout_sec": 0, 00:17:47.712 "reconnect_delay_sec": 0, 00:17:47.712 "fast_io_fail_timeout_sec": 0, 00:17:47.712 "disable_auto_failback": false, 00:17:47.712 "generate_uuids": false, 00:17:47.712 "transport_tos": 0, 00:17:47.712 "nvme_error_stat": false, 00:17:47.712 "rdma_srq_size": 0, 00:17:47.712 "io_path_stat": false, 00:17:47.712 "allow_accel_sequence": false, 00:17:47.712 "rdma_max_cq_size": 0, 00:17:47.712 "rdma_cm_event_timeout_ms": 0, 00:17:47.712 "dhchap_digests": [ 00:17:47.712 "sha256", 00:17:47.712 "sha384", 00:17:47.712 "sha512" 00:17:47.712 ], 00:17:47.712 "dhchap_dhgroups": [ 00:17:47.712 "null", 00:17:47.712 "ffdhe2048", 00:17:47.712 "ffdhe3072", 00:17:47.712 "ffdhe4096", 00:17:47.712 "ffdhe6144", 00:17:47.712 "ffdhe8192" 00:17:47.712 ] 00:17:47.712 } 00:17:47.712 }, 00:17:47.712 { 00:17:47.712 "method": "bdev_nvme_attach_controller", 00:17:47.712 "params": { 00:17:47.712 "name": "nvme0", 00:17:47.712 "trtype": "TCP", 00:17:47.712 "adrfam": "IPv4", 00:17:47.712 "traddr": "10.0.0.2", 00:17:47.712 "trsvcid": "4420", 00:17:47.712 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:47.712 "prchk_reftag": false, 00:17:47.712 "prchk_guard": false, 00:17:47.712 "ctrlr_loss_timeout_sec": 0, 00:17:47.712 "reconnect_delay_sec": 0, 00:17:47.712 "fast_io_fail_timeout_sec": 0, 00:17:47.712 "psk": "key0", 00:17:47.712 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:47.712 "hdgst": false, 00:17:47.712 "ddgst": false 00:17:47.712 } 00:17:47.712 }, 00:17:47.712 { 00:17:47.712 "method": "bdev_nvme_set_hotplug", 00:17:47.712 "params": { 00:17:47.712 "period_us": 100000, 00:17:47.712 "enable": false 00:17:47.712 } 00:17:47.712 }, 00:17:47.712 { 00:17:47.712 "method": "bdev_enable_histogram", 00:17:47.712 "params": { 00:17:47.712 "name": "nvme0n1", 00:17:47.712 "enable": true 00:17:47.712 } 00:17:47.712 }, 00:17:47.712 { 00:17:47.712 "method": "bdev_wait_for_examine" 00:17:47.712 } 00:17:47.712 ] 00:17:47.712 }, 00:17:47.712 { 00:17:47.712 "subsystem": "nbd", 00:17:47.712 "config": [] 00:17:47.712 } 00:17:47.712 ] 00:17:47.712 }' 00:17:47.713 19:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:47.713 19:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:47.713 [2024-07-24 19:00:25.257694] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
00:17:47.713 [2024-07-24 19:00:25.257774] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3173897 ] 00:17:47.713 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.970 [2024-07-24 19:00:25.325216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.970 [2024-07-24 19:00:25.443076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.229 [2024-07-24 19:00:25.632219] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:48.793 19:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:48.793 19:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:48.793 19:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:48.793 19:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:17:49.056 19:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.056 19:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:49.056 Running I/O for 1 seconds... 00:17:50.427 00:17:50.427 Latency(us) 00:17:50.427 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.427 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:50.427 Verification LBA range: start 0x0 length 0x2000 00:17:50.427 nvme0n1 : 1.03 2687.91 10.50 0.00 0.00 47005.09 7039.05 73011.96 00:17:50.427 =================================================================================================================== 00:17:50.427 Total : 2687.91 10.50 0.00 0.00 47005.09 7039.05 73011.96 00:17:50.427 0 00:17:50.427 19:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:17:50.427 19:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:17:50.427 19:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:17:50.427 19:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:17:50.427 19:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:17:50.427 19:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:17:50.427 19:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:50.427 19:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:17:50.427 19:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:17:50.427 19:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:17:50.427 19:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:50.427 nvmf_trace.0 00:17:50.427 19:00:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:17:50.428 19:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3173897 00:17:50.428 19:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3173897 ']' 00:17:50.428 19:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3173897 00:17:50.428 19:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:50.428 19:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:50.428 19:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3173897 00:17:50.428 19:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:50.428 19:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:50.428 19:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3173897' 00:17:50.428 killing process with pid 3173897 00:17:50.428 19:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3173897 00:17:50.428 Received shutdown signal, test time was about 1.000000 seconds 00:17:50.428 00:17:50.428 Latency(us) 00:17:50.428 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.428 =================================================================================================================== 00:17:50.428 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:50.428 19:00:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3173897 00:17:50.428 19:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:17:50.428 19:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:50.428 19:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:17:50.428 19:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:50.428 19:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:17:50.428 19:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:50.428 19:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:50.428 rmmod nvme_tcp 00:17:50.686 rmmod nvme_fabrics 00:17:50.686 rmmod nvme_keyring 00:17:50.686 19:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:50.686 19:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:17:50.686 19:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:17:50.686 19:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3173778 ']' 00:17:50.686 19:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3173778 00:17:50.686 19:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3173778 ']' 00:17:50.686 19:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3173778 00:17:50.686 19:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:50.686 19:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:50.686 19:00:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3173778 00:17:50.686 19:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:50.686 19:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:50.686 19:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3173778' 00:17:50.686 killing process with pid 3173778 00:17:50.686 19:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3173778 00:17:50.686 19:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3173778 00:17:50.943 19:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:50.943 19:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:50.943 19:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:50.944 19:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:50.944 19:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:50.944 19:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.944 19:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:50.944 19:00:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.847 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:52.847 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.MOA5lfkFvl /tmp/tmp.Cae0jIXGYL /tmp/tmp.864rhk5WtD 00:17:52.848 00:17:52.848 real 1m22.499s 00:17:52.848 user 2m11.099s 00:17:52.848 sys 0m28.046s 00:17:52.848 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:52.848 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.848 ************************************ 00:17:52.848 END TEST nvmf_tls 00:17:52.848 ************************************ 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:53.106 ************************************ 00:17:53.106 START TEST nvmf_fips 00:17:53.106 ************************************ 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:53.106 * Looking for test storage... 
00:17:53.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:17:53.106 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:17:53.107 Error setting digest 00:17:53.107 0032EA4C3B7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:17:53.107 0032EA4C3B7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:17:53.107 19:00:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:55.642 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:55.642 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:17:55.642 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:55.642 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:55.642 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:55.642 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:55.642 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:55.642 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:17:55.642 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:55.642 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:17:55.642 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:17:55.642 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:17:55.642 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:17:55.642 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:17:55.642 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:17:55.642 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:55.642 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:55.642 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:55.642 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:55.642 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:55.642 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:55.642 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:55.642 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:55.642 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:55.642 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:55.642 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:55.643 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 
00:17:55.643 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:55.643 Found net devices under 0000:09:00.0: cvl_0_0 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:55.643 Found net devices under 0000:09:00.1: cvl_0_1 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:55.643 
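[Annotation] Stepping back to the top of this test: the openssl probing earlier (the modulesdir lookup, the fipsinstall warning, the provider list showing both "openssl base provider" and the "red hat enterprise linux 9 - openssl fips provider") establishes that FIPS providers are active, and the decisive check is that a non-approved digest must then fail. That last step reduces to a one-liner; this is a sketch, with the spdk_fips.conf filename taken from the trace (its contents are whatever build_openssl_config emitted, which the trace does not show), and the exit-code framing added for illustration:

  # MD5 is not a FIPS-approved digest, so with the fips provider active this
  # openssl call must fail ("Error setting digest", as seen in the trace).
  # The test treats success here as proof that FIPS is NOT being enforced.
  if OPENSSL_CONF=spdk_fips.conf openssl md5 /dev/null; then
      echo "FIPS mode NOT enforced" >&2
      exit 1
  fi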
19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:55.643 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:55.643 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:17:55.643 00:17:55.643 --- 10.0.0.2 ping statistics --- 00:17:55.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.643 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:55.643 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:55.643 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:17:55.643 00:17:55.643 --- 10.0.0.1 ping statistics --- 00:17:55.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.643 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3176256 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3176256 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 3176256 ']' 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:55.643 19:00:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:55.643 [2024-07-24 19:00:32.918951] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
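[Annotation] Before the target's startup banner continues below, the nvmf_tcp_init sequence just above is worth restating in condensed form: the two ports of the same E810 NIC are wired back to back, the target port is isolated in its own network namespace, and reachability is proven in both directions before nvmf_tgt starts. The commands are lifted from the trace (only the comments are added; the initial addr flushes are omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
  ping -c 1 10.0.0.2                                             # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> root namespace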
00:17:55.644 [2024-07-24 19:00:32.919046] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:55.644 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.644 [2024-07-24 19:00:32.987405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.644 [2024-07-24 19:00:33.102850] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:55.644 [2024-07-24 19:00:33.102913] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:55.644 [2024-07-24 19:00:33.102930] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:55.644 [2024-07-24 19:00:33.102943] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:55.644 [2024-07-24 19:00:33.102955] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:55.644 [2024-07-24 19:00:33.102986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.590 19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:56.590 19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:17:56.590 19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:56.590 19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:56.590 19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:56.590 19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:56.590 19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:17:56.590 19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:56.590 19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:56.590 19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:56.590 19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:56.590 19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:56.590 19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:56.590 19:00:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:56.590 [2024-07-24 19:00:34.072661] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:56.590 [2024-07-24 19:00:34.088659] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:56.590 [2024-07-24 19:00:34.088884] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:56.590 
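[Annotation] The key handling above is the PSK interchange-format flavor: the secret is written to a 0600 file and handed to the target by path, which is exactly the deprecated mechanism the tcp.c:3725 warning just below calls out. A condensed sketch of the target-side setup follows; the rpc.py subcommand and flag spellings are this era's as best understood (the trace only shows rpc.py being invoked), and the bdev size, subsystem serial, and --secure-channel listener flag are assumptions, not taken from the trace:

  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
  chmod 0600 key.txt                                   # harness keeps the PSK file private
  scripts/rpc.py nvmf_create_transport -t tcp -o       # matches NVMF_TRANSPORT_OPTS='-t tcp -o'
  scripts/rpc.py bdev_malloc_create -b malloc0 32 4096 # illustrative backing bdev (the trace shows malloc0)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key.txt
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 --secure-channel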
[2024-07-24 19:00:34.121523] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:56.590 malloc0 00:17:56.590 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:56.590 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3176411 00:17:56.590 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:56.590 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3176411 /var/tmp/bdevperf.sock 00:17:56.590 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 3176411 ']' 00:17:56.590 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:56.590 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:56.590 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:56.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:56.590 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:56.590 19:00:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:56.848 [2024-07-24 19:00:34.213354] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:17:56.848 [2024-07-24 19:00:34.213434] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3176411 ] 00:17:56.848 EAL: No free 2048 kB hugepages reported on node 1 00:17:56.848 [2024-07-24 19:00:34.270063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.848 [2024-07-24 19:00:34.380618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.780 19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:57.780 19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:17:57.780 19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:58.038 [2024-07-24 19:00:35.455685] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:58.038 [2024-07-24 19:00:35.455819] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:58.038 TLSTESTn1 00:17:58.038 19:00:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:58.295 Running I/O for 10 seconds... 
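[Annotation] The initiator side of the run above condenses to three commands: bdevperf starts idle (-z) with its own RPC socket, the controller is attached over TLS using the same PSK file, and perform_tests kicks off the queued verify workload. These are taken directly from the trace, with the /var/jenkins workspace prefix shortened to repo-relative paths:

  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk test/nvmf/fips/key.txt
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests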
00:18:08.268
00:18:08.268 Latency(us)
00:18:08.268 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:08.268 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:08.268 Verification LBA range: start 0x0 length 0x2000
00:18:08.268 TLSTESTn1 : 10.03 1992.99 7.79 0.00 0.00 64102.04 6650.69 79225.74
00:18:08.268 ===================================================================================================================
00:18:08.268 Total : 1992.99 7.79 0.00 0.00 64102.04 6650.69 79225.74
00:18:08.268 0
00:18:08.268 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id
19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0
19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']'
19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n'
19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0
19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]]
19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files
19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
nvmf_trace.0
19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0
19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3176411
19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 3176411 ']'
19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 3176411
19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname
19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3176411
19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2
19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3176411'
killing process with pid 3176411
19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 3176411
Received shutdown signal, test time was about 10.000000 seconds
00:18:08.268
00:18:08.268 Latency(us)
00:18:08.268 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:08.268 ===================================================================================================================
00:18:08.268 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
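[Annotation] The headline numbers in the first table cross-check against each other: at the 4096-byte IO size, the MiB/s column is just IOPS scaled by IO size, and with queue depth 128 the average latency implies roughly the same IOPS by Little's law (the small gap is ramp and teardown noise). Pure arithmetic on the table values:

  awk 'BEGIN { printf "%.2f MiB/s\n", 1992.99 * 4096 / (1024 * 1024) }'   # -> 7.79 MiB/s, matching the table
  awk 'BEGIN { printf "%.0f IOPS\n", 128 / (64102.04 / 1e6) }'            # -> ~1997 IOPS vs. 1992.99 measured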
[2024-07-24 19:00:45.847125] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:08.268 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 3176411 00:18:08.569 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:08.569 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:08.569 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:18:08.569 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:08.569 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:18:08.569 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:08.569 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:08.569 rmmod nvme_tcp 00:18:08.569 rmmod nvme_fabrics 00:18:08.827 rmmod nvme_keyring 00:18:08.827 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:08.827 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:18:08.827 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:18:08.827 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3176256 ']' 00:18:08.827 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3176256 00:18:08.827 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 3176256 ']' 00:18:08.827 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 3176256 00:18:08.827 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:18:08.827 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:08.827 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3176256 00:18:08.827 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:08.827 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:08.827 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3176256' 00:18:08.827 killing process with pid 3176256 00:18:08.827 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 3176256 00:18:08.827 [2024-07-24 19:00:46.200064] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:08.827 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 3176256 00:18:09.083 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:09.083 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:09.083 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:09.083 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:09.083 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:09.083 19:00:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.084 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:09.084 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.009 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:11.009 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:11.009 00:18:11.009 real 0m18.040s 00:18:11.009 user 0m22.662s 00:18:11.009 sys 0m7.012s 00:18:11.009 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:11.009 19:00:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:11.009 ************************************ 00:18:11.009 END TEST nvmf_fips 00:18:11.009 ************************************ 00:18:11.009 19:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:18:11.009 19:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:18:11.009 19:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:18:11.009 19:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:18:11.009 19:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:18:11.009 19:00:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:12.903 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:12.903 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:18:12.903 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:12.903 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:12.903 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:12.903 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:12.903 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:12.903 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:18:12.903 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:12.903 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:18:12.903 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:12.904 
19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:12.904 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:12.904 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:12.904 Found net devices under 0000:09:00.0: cvl_0_0 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:12.904 Found net devices under 0000:09:00.1: cvl_0_1 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:12.904 19:00:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:13.161 ************************************ 00:18:13.161 START TEST nvmf_perf_adq 00:18:13.161 ************************************ 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:13.161 * Looking for test storage... 
00:18:13.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.161 19:00:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:13.161 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:15.690 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:15.690 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:18:15.690 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:15.690 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:15.690 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:15.690 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:15.690 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:15.690 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:18:15.690 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:15.690 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:18:15.690 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:18:15.690 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:18:15.690 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:18:15.690 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:18:15.690 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:18:15.690 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:15.691 19:00:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:15.691 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:15.691 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:15.691 Found net devices under 0000:09:00.0: cvl_0_0 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:15.691 Found net devices under 0000:09:00.1: cvl_0_1 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:18:15.691 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:18:15.949 19:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:18:17.844 19:00:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 
00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:23.118 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:23.118 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:23.118 Found net devices under 0000:09:00.0: cvl_0_0 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:23.118 19:01:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:23.118 Found net devices under 0000:09:00.1: cvl_0_1 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:18:23.118 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
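[Editor's note] nvmf_tcp_init above builds the test topology: the two E810 ports face each other on the wire, so the target-side port cvl_0_0 (10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace while the initiator port cvl_0_1 (10.0.0.1) stays in the root namespace, letting one host run both ends of the fabric. Condensed from the commands logged above:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                    # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator address, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

The port-4420 iptables rule and the two sub-millisecond cross-namespace pings that follow just below are the sanity check that the link is actually up before the target is started.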
00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:23.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:23.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:18:23.119 00:18:23.119 --- 10.0.0.2 ping statistics --- 00:18:23.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.119 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:23.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:23.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:18:23.119 00:18:23.119 --- 10.0.0.1 ping statistics --- 00:18:23.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.119 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3182290 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3182290 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 3182290 ']' 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:18:23.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:23.119 [2024-07-24 19:01:00.482947] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:18:23.119 [2024-07-24 19:01:00.483037] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:23.119 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.119 [2024-07-24 19:01:00.551614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:23.119 [2024-07-24 19:01:00.664792] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:23.119 [2024-07-24 19:01:00.664843] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:23.119 [2024-07-24 19:01:00.664872] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:23.119 [2024-07-24 19:01:00.664883] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:23.119 [2024-07-24 19:01:00.664892] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:23.119 [2024-07-24 19:01:00.664970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:23.119 [2024-07-24 19:01:00.664995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:23.119 [2024-07-24 19:01:00.665054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:23.119 [2024-07-24 19:01:00.665057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:23.119 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 
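[Editor's note] rpc_cmd is a thin wrapper over SPDK's scripts/rpc.py talking to /var/tmp/spdk.sock; since UNIX sockets are filesystem objects, the client works from the root namespace even though nvmf_tgt runs inside cvl_0_0_ns_spdk. The adq_configure_nvmf_target 0 sequence that follows sets up the baseline target (placement-id 0, i.e. ADQ-style queue placement off). Replayed as plain rpc.py calls, reconstructed from the arguments logged here, with the client path assumed relative to the SPDK checkout:

    rpc=./scripts/rpc.py    # SPDK RPC client (path assumed; rpc_cmd resolves it)
    # target was started with --wait-for-rpc, so socket options go in pre-init
    $rpc sock_impl_set_options --enable-placement-id 0 \
         --enable-zerocopy-send-server -i posix
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    $rpc bdev_malloc_create 64 512 -b Malloc1        # 64 MiB RAM-backed namespace
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once spdk_nvme_perf is running, the pass/fail check further down is a jq count over nvmf_get_stats: with placement off, connections spread evenly, so all four poll groups should each carry one I/O qpair:

    $rpc nvmf_get_stats \
      | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
      | wc -l                                        # baseline expects 4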
00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:23.378 [2024-07-24 19:01:00.887679] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:23.378 Malloc1 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:23.378 [2024-07-24 19:01:00.939596] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3182433 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:18:23.378 19:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:23.378 EAL: No free 2048 kB hugepages reported on node 1 00:18:25.915 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:18:25.915 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.915 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:25.915 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.915 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:18:25.915 "tick_rate": 2700000000, 00:18:25.915 "poll_groups": [ 00:18:25.915 { 00:18:25.915 "name": "nvmf_tgt_poll_group_000", 00:18:25.915 "admin_qpairs": 1, 00:18:25.915 "io_qpairs": 1, 00:18:25.915 "current_admin_qpairs": 1, 00:18:25.915 "current_io_qpairs": 1, 00:18:25.915 "pending_bdev_io": 0, 00:18:25.915 "completed_nvme_io": 19585, 00:18:25.915 "transports": [ 00:18:25.915 { 00:18:25.915 "trtype": "TCP" 00:18:25.915 } 00:18:25.915 ] 00:18:25.915 }, 00:18:25.915 { 00:18:25.915 "name": "nvmf_tgt_poll_group_001", 00:18:25.916 "admin_qpairs": 0, 00:18:25.916 "io_qpairs": 1, 00:18:25.916 "current_admin_qpairs": 0, 00:18:25.916 "current_io_qpairs": 1, 00:18:25.916 "pending_bdev_io": 0, 00:18:25.916 "completed_nvme_io": 19777, 00:18:25.916 "transports": [ 00:18:25.916 { 00:18:25.916 "trtype": "TCP" 00:18:25.916 } 00:18:25.916 ] 00:18:25.916 }, 00:18:25.916 { 00:18:25.916 "name": "nvmf_tgt_poll_group_002", 00:18:25.916 "admin_qpairs": 0, 00:18:25.916 "io_qpairs": 1, 00:18:25.916 "current_admin_qpairs": 0, 00:18:25.916 "current_io_qpairs": 1, 00:18:25.916 "pending_bdev_io": 0, 00:18:25.916 "completed_nvme_io": 20907, 00:18:25.916 "transports": [ 00:18:25.916 { 00:18:25.916 "trtype": "TCP" 00:18:25.916 } 00:18:25.916 ] 00:18:25.916 }, 00:18:25.916 { 00:18:25.916 "name": "nvmf_tgt_poll_group_003", 00:18:25.916 "admin_qpairs": 0, 00:18:25.916 "io_qpairs": 1, 00:18:25.916 "current_admin_qpairs": 0, 00:18:25.916 "current_io_qpairs": 1, 00:18:25.916 "pending_bdev_io": 0, 00:18:25.916 "completed_nvme_io": 20165, 00:18:25.916 "transports": [ 00:18:25.916 { 00:18:25.916 "trtype": "TCP" 00:18:25.916 } 00:18:25.916 ] 00:18:25.916 } 00:18:25.916 ] 00:18:25.916 }' 00:18:25.916 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:18:25.916 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:18:25.916 19:01:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:18:25.916 19:01:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:18:25.916 19:01:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@83 -- # wait 3182433 00:18:34.022 Initializing NVMe Controllers 00:18:34.022 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:34.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:18:34.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:18:34.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:18:34.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:18:34.022 Initialization complete. Launching workers. 00:18:34.022 ======================================================== 00:18:34.022 Latency(us) 00:18:34.022 Device Information : IOPS MiB/s Average min max 00:18:34.022 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10733.83 41.93 5962.00 2281.39 9880.98 00:18:34.022 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10565.03 41.27 6057.30 2036.11 8673.46 00:18:34.022 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10523.23 41.11 6081.80 4151.36 8324.51 00:18:34.022 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11143.13 43.53 5743.26 1922.12 7783.68 00:18:34.022 ======================================================== 00:18:34.022 Total : 42965.21 167.83 5958.04 1922.12 9880.98 00:18:34.022 00:18:34.022 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:18:34.022 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:34.022 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:18:34.022 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:34.022 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:18:34.022 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:34.022 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:34.022 rmmod nvme_tcp 00:18:34.022 rmmod nvme_fabrics 00:18:34.022 rmmod nvme_keyring 00:18:34.022 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:34.022 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:18:34.022 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:18:34.022 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3182290 ']' 00:18:34.022 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3182290 00:18:34.022 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 3182290 ']' 00:18:34.022 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 3182290 00:18:34.022 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:18:34.022 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:34.022 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3182290 00:18:34.022 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:34.022 19:01:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:34.022 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3182290' 00:18:34.022 killing process with pid 3182290 00:18:34.022 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 3182290 00:18:34.022 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 3182290 00:18:34.022 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:34.022 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:34.022 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:34.022 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:34.022 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:34.022 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.022 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:34.022 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.553 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:36.553 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:18:36.553 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:18:36.812 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:18:38.825 19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:18:44.111 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:18:44.111 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:44.111 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:44.111 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:44.111 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:44.111 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:44.111 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.111 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:44.111 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.111 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:44.111 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:44.111 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:44.111 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:44.111 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:44.112 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:44.112 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:44.112 Found net devices under 0000:09:00.0: cvl_0_0 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:44.112 Found net devices under 0000:09:00.1: cvl_0_1 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:44.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:44.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:18:44.112 00:18:44.112 --- 10.0.0.2 ping statistics --- 00:18:44.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.112 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:44.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:44.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:18:44.112 00:18:44.112 --- 10.0.0.1 ping statistics --- 00:18:44.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.112 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:44.112 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:44.113 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:44.113 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:44.113 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:44.113 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:44.113 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:18:44.113 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:18:44.113 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:18:44.113 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:18:44.113 net.core.busy_poll = 1 00:18:44.113 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:18:44.113 net.core.busy_read = 1 00:18:44.113 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:18:44.113 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:18:44.113 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:18:44.113 
19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:18:44.113 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:18:44.113 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:44.113 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:44.113 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:44.113 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:44.113 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3185070 00:18:44.113 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:44.113 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3185070 00:18:44.113 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 3185070 ']' 00:18:44.113 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.113 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:44.113 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.113 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:44.113 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:44.113 [2024-07-24 19:01:21.481895] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:18:44.113 [2024-07-24 19:01:21.481987] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.113 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.113 [2024-07-24 19:01:21.547783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:44.113 [2024-07-24 19:01:21.658787] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.113 [2024-07-24 19:01:21.658855] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:44.113 [2024-07-24 19:01:21.658869] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:44.113 [2024-07-24 19:01:21.658880] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:44.113 [2024-07-24 19:01:21.658889] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
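[Editor's note] Between the two runs the ice driver was reloaded (rmmod/modprobe plus a settle sleep) and adq_configure_driver then armed ADQ on the target port: hardware TC offload on, the channel-pkt-inspect-optimize private flag off, busy polling enabled, and an mqprio root qdisc with a hardware flower filter pinning NVMe/TCP traffic to its own traffic class. Consolidated from the commands logged just above ($ns is shorthand introduced only in this sketch):

    ns="ip netns exec cvl_0_0_ns_spdk"
    $ns ethtool --offload cvl_0_0 hw-tc-offload on
    $ns ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1     # busy-poll the NIC from blocked socket reads
    sysctl -w net.core.busy_read=1
    # two traffic classes: TC0 = 2 queues at offset 0, TC1 = 2 queues at offset 2
    $ns tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    $ns tc qdisc add dev cvl_0_0 ingress
    # steer port-4420 flows into TC1 entirely in hardware (skip_sw)
    $ns tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The second target is then configured with --enable-placement-id 1, so poll groups follow the NIC queues: the nvmf_get_stats snapshot below shows the qpairs packed onto two poll groups (io_qpairs 1/3/0/0) instead of the baseline's even 1/1/1/1 spread, and the jq check accordingly flips to counting idle groups (select current_io_qpairs == 0, requiring at least 2).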
00:18:44.113 [2024-07-24 19:01:21.659009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.113 [2024-07-24 19:01:21.659075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:44.113 [2024-07-24 19:01:21.659334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:44.113 [2024-07-24 19:01:21.659339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.113 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:44.113 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:18:44.113 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:44.113 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:44.113 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:44.113 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.371 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:18:44.371 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:18:44.371 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:18:44.371 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.371 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:44.371 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.371 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:18:44.371 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:18:44.371 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.371 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:44.371 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.371 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:18:44.371 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.371 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:44.371 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.371 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:18:44.371 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.371 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:44.371 [2024-07-24 19:01:21.879797] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:44.371 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:18:44.371 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:44.371 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.371 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:44.371 Malloc1 00:18:44.371 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.371 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:44.371 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.371 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:44.372 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.372 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:44.372 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.372 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:44.372 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.372 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:44.372 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.372 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:44.372 [2024-07-24 19:01:21.931372] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:44.372 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.372 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3185104 00:18:44.372 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:18:44.372 19:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:44.372 EAL: No free 2048 kB hugepages reported on node 1 00:18:46.900 19:01:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:18:46.900 19:01:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.900 19:01:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:46.900 19:01:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.900 19:01:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:18:46.900 "tick_rate": 2700000000, 00:18:46.900 "poll_groups": [ 00:18:46.900 { 00:18:46.900 "name": "nvmf_tgt_poll_group_000", 00:18:46.900 "admin_qpairs": 1, 00:18:46.900 "io_qpairs": 1, 00:18:46.900 "current_admin_qpairs": 1, 00:18:46.900 
"current_io_qpairs": 1, 00:18:46.900 "pending_bdev_io": 0, 00:18:46.900 "completed_nvme_io": 25370, 00:18:46.900 "transports": [ 00:18:46.900 { 00:18:46.900 "trtype": "TCP" 00:18:46.900 } 00:18:46.900 ] 00:18:46.900 }, 00:18:46.900 { 00:18:46.900 "name": "nvmf_tgt_poll_group_001", 00:18:46.900 "admin_qpairs": 0, 00:18:46.900 "io_qpairs": 3, 00:18:46.900 "current_admin_qpairs": 0, 00:18:46.900 "current_io_qpairs": 3, 00:18:46.900 "pending_bdev_io": 0, 00:18:46.900 "completed_nvme_io": 26918, 00:18:46.900 "transports": [ 00:18:46.900 { 00:18:46.900 "trtype": "TCP" 00:18:46.900 } 00:18:46.900 ] 00:18:46.900 }, 00:18:46.900 { 00:18:46.900 "name": "nvmf_tgt_poll_group_002", 00:18:46.900 "admin_qpairs": 0, 00:18:46.900 "io_qpairs": 0, 00:18:46.900 "current_admin_qpairs": 0, 00:18:46.900 "current_io_qpairs": 0, 00:18:46.900 "pending_bdev_io": 0, 00:18:46.900 "completed_nvme_io": 0, 00:18:46.900 "transports": [ 00:18:46.900 { 00:18:46.901 "trtype": "TCP" 00:18:46.901 } 00:18:46.901 ] 00:18:46.901 }, 00:18:46.901 { 00:18:46.901 "name": "nvmf_tgt_poll_group_003", 00:18:46.901 "admin_qpairs": 0, 00:18:46.901 "io_qpairs": 0, 00:18:46.901 "current_admin_qpairs": 0, 00:18:46.901 "current_io_qpairs": 0, 00:18:46.901 "pending_bdev_io": 0, 00:18:46.901 "completed_nvme_io": 0, 00:18:46.901 "transports": [ 00:18:46.901 { 00:18:46.901 "trtype": "TCP" 00:18:46.901 } 00:18:46.901 ] 00:18:46.901 } 00:18:46.901 ] 00:18:46.901 }' 00:18:46.901 19:01:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:18:46.901 19:01:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:18:46.901 19:01:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:18:46.901 19:01:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:18:46.901 19:01:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3185104 00:18:55.180 Initializing NVMe Controllers 00:18:55.180 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:55.180 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:18:55.180 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:18:55.180 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:18:55.180 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:18:55.180 Initialization complete. Launching workers. 
00:18:55.180 ======================================================== 00:18:55.180 Latency(us) 00:18:55.180 Device Information : IOPS MiB/s Average min max 00:18:55.180 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4632.00 18.09 13826.08 2438.39 61615.42 00:18:55.180 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4418.10 17.26 14497.44 2667.59 60701.02 00:18:55.180 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5055.00 19.75 12669.93 1744.22 62884.17 00:18:55.180 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13418.90 52.42 4769.63 1193.89 6908.45 00:18:55.180 ======================================================== 00:18:55.180 Total : 27524.00 107.52 9306.18 1193.89 62884.17 00:18:55.180 00:18:55.180 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:18:55.180 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:55.180 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:18:55.180 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:55.180 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:18:55.180 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:55.180 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:55.180 rmmod nvme_tcp 00:18:55.180 rmmod nvme_fabrics 00:18:55.180 rmmod nvme_keyring 00:18:55.180 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:55.180 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:18:55.180 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:18:55.180 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3185070 ']' 00:18:55.180 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3185070 00:18:55.180 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 3185070 ']' 00:18:55.180 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 3185070 00:18:55.181 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:18:55.181 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:55.181 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3185070 00:18:55.181 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:55.181 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:55.181 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3185070' 00:18:55.181 killing process with pid 3185070 00:18:55.181 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 3185070 00:18:55.181 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 3185070 00:18:55.181 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:55.181 
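In outline, the perf_adq flow traced above is a four-RPC target bring-up followed by a core-pinned perf run and an idle-poll-group check. Everything in the sketch below is lifted from the xtrace except the failure branch, which perf_adq.sh takes somewhere between lines 101 and 106 and which this passing run never shows, so its exact form is an assumption:

    rpc_cmd bdev_malloc_create 64 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
    perfpid=$!
    sleep 2
    # Count poll groups that took no I/O qpairs; with ADQ steering traffic onto a
    # subset of the four groups, at least two should sit idle.
    count=$(rpc_cmd nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l)
    (( count >= 2 )) || exit 1   # assumed failure branch; count=2 here, so the run continues
    wait $perfpid

The stats dump bears the check out: groups 000 and 001 carried all four I/O qpairs between them while 002 and 003 stayed at zero, giving count=2.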
19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:55.181 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:55.181 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:55.181 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:55.181 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.181 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:55.181 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.463 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:58.463 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:18:58.463 00:18:58.463 real 0m44.979s 00:18:58.463 user 2m34.746s 00:18:58.463 sys 0m11.518s 00:18:58.463 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:58.463 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:58.463 ************************************ 00:18:58.463 END TEST nvmf_perf_adq 00:18:58.463 ************************************ 00:18:58.463 19:01:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:18:58.463 19:01:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:58.463 19:01:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:58.463 19:01:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:58.463 ************************************ 00:18:58.463 START TEST nvmf_shutdown 00:18:58.463 ************************************ 00:18:58.463 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:18:58.463 * Looking for test storage... 
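For reference, the nvmftestfini teardown traced just above reduces to unloading the initiator-side NVMe modules, stopping the target, and dismantling the test namespace; each step appears verbatim in the xtrace, with the pid as used in this run:

    modprobe -v -r nvme-tcp       # rmmod reports nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 3185070                  # killprocess: the nvmf_tgt started for this suite
    _remove_spdk_ns               # deletes cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1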
00:18:58.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:58.463 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:58.463 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:18:58.463 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.464 19:01:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:18:58.464 19:01:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:18:58.464 ************************************ 00:18:58.464 START TEST nvmf_shutdown_tc1 00:18:58.464 ************************************ 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:58.464 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:59.840 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:59.840 19:01:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:59.840 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.840 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:59.840 Found net devices under 0000:09:00.0: cvl_0_0 00:18:59.841 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.841 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:59.841 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.841 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:59.841 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:59.841 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:59.841 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:59.841 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.841 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:59.841 Found net devices under 0000:09:00.1: cvl_0_1 00:18:59.841 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.841 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:59.841 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:18:59.841 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:59.841 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:59.841 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:59.841 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:59.841 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:59.841 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:59.841 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:59.841 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:59.841 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:59.841 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:59.841 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:59.841 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:59.841 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:59.841 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:00.099 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:00.099 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:00.099 19:01:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:00.099 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:00.099 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:00.099 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:00.099 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:00.099 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:00.099 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:00.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:00.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:19:00.099 00:19:00.100 --- 10.0.0.2 ping statistics --- 00:19:00.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.100 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:19:00.100 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:00.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:00.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:19:00.100 00:19:00.100 --- 10.0.0.1 ping statistics --- 00:19:00.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.100 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:19:00.100 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:00.100 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:19:00.100 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:00.100 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:00.100 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:00.100 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:00.100 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:00.100 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:00.100 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:00.100 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:00.100 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:00.100 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:00.100 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:19:00.100 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3188392 00:19:00.100 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:00.100 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3188392 00:19:00.100 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3188392 ']' 00:19:00.100 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.100 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:00.100 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.100 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:00.100 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:00.100 [2024-07-24 19:01:37.639306] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:19:00.100 [2024-07-24 19:01:37.639379] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.100 EAL: No free 2048 kB hugepages reported on node 1 00:19:00.358 [2024-07-24 19:01:37.710096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:00.358 [2024-07-24 19:01:37.830098] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:00.358 [2024-07-24 19:01:37.830175] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:00.358 [2024-07-24 19:01:37.830191] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:00.358 [2024-07-24 19:01:37.830204] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:00.358 [2024-07-24 19:01:37.830216] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
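The addressing and ping checks above come from the two-port topology nvmf/common.sh builds for NET_TYPE=phy: one E810 port (cvl_0_0) is moved into a network namespace and serves as the target side, while its sibling (cvl_0_1) stays in the root namespace as the initiator. Condensed from the xtrace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1E   # nvmfappstart

With the 0x1E core mask the target reports four available cores and, as the notices that follow show, starts its reactors on cores 1 through 4.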
00:19:00.358 [2024-07-24 19:01:37.830273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:00.358 [2024-07-24 19:01:37.830402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:00.358 [2024-07-24 19:01:37.830612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:00.358 [2024-07-24 19:01:37.830616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:00.358 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:00.358 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:19:00.358 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:00.358 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:00.358 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:00.617 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:00.617 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:00.617 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.617 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:00.617 [2024-07-24 19:01:37.972310] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:00.617 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.617 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:00.617 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:00.617 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:00.617 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:00.617 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:00.617 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:00.617 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:00.617 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:00.617 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:00.617 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:00.617 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:00.617 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:19:00.617 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:00.617 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:00.617 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:00.617 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:00.617 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:00.617 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:00.617 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:00.617 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:00.617 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:00.617 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:00.617 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:00.617 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:00.617 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:00.617 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:00.617 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.617 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:00.617 Malloc1 00:19:00.617 [2024-07-24 19:01:38.047687] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:00.617 Malloc2 00:19:00.617 Malloc3 00:19:00.617 Malloc4 00:19:00.875 Malloc5 00:19:00.875 Malloc6 00:19:00.875 Malloc7 00:19:00.875 Malloc8 00:19:00.875 Malloc9 00:19:00.875 Malloc10 00:19:01.133 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.133 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:01.133 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:01.133 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:01.133 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3188566 00:19:01.133 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3188566 /var/tmp/bdevperf.sock 00:19:01.133 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3188566 ']' 00:19:01.133 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:01.133 19:01:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:01.133 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:01.133 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:01.133 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:01.133 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:01.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:01.133 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:01.133 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:01.133 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:01.133 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:01.133 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:01.133 { 00:19:01.133 "params": { 00:19:01.133 "name": "Nvme$subsystem", 00:19:01.133 "trtype": "$TEST_TRANSPORT", 00:19:01.133 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:01.133 "adrfam": "ipv4", 00:19:01.133 "trsvcid": "$NVMF_PORT", 00:19:01.133 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:01.133 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:01.133 "hdgst": ${hdgst:-false}, 00:19:01.133 "ddgst": ${ddgst:-false} 00:19:01.133 }, 00:19:01.133 "method": "bdev_nvme_attach_controller" 00:19:01.133 } 00:19:01.133 EOF 00:19:01.133 )") 00:19:01.133 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:01.133 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:01.133 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:01.133 { 00:19:01.133 "params": { 00:19:01.133 "name": "Nvme$subsystem", 00:19:01.133 "trtype": "$TEST_TRANSPORT", 00:19:01.133 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:01.133 "adrfam": "ipv4", 00:19:01.133 "trsvcid": "$NVMF_PORT", 00:19:01.133 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:01.133 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:01.133 "hdgst": ${hdgst:-false}, 00:19:01.133 "ddgst": ${ddgst:-false} 00:19:01.133 }, 00:19:01.133 "method": "bdev_nvme_attach_controller" 00:19:01.133 } 00:19:01.133 EOF 00:19:01.133 )") 00:19:01.133 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:01.133 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:01.133 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:01.133 { 00:19:01.133 "params": { 00:19:01.133 "name": 
"Nvme$subsystem", 00:19:01.133 "trtype": "$TEST_TRANSPORT", 00:19:01.133 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:01.133 "adrfam": "ipv4", 00:19:01.133 "trsvcid": "$NVMF_PORT", 00:19:01.133 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:01.133 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:01.133 "hdgst": ${hdgst:-false}, 00:19:01.133 "ddgst": ${ddgst:-false} 00:19:01.133 }, 00:19:01.133 "method": "bdev_nvme_attach_controller" 00:19:01.133 } 00:19:01.133 EOF 00:19:01.133 )") 00:19:01.134 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:01.134 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:01.134 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:01.134 { 00:19:01.134 "params": { 00:19:01.134 "name": "Nvme$subsystem", 00:19:01.134 "trtype": "$TEST_TRANSPORT", 00:19:01.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:01.134 "adrfam": "ipv4", 00:19:01.134 "trsvcid": "$NVMF_PORT", 00:19:01.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:01.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:01.134 "hdgst": ${hdgst:-false}, 00:19:01.134 "ddgst": ${ddgst:-false} 00:19:01.134 }, 00:19:01.134 "method": "bdev_nvme_attach_controller" 00:19:01.134 } 00:19:01.134 EOF 00:19:01.134 )") 00:19:01.134 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:01.134 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:01.134 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:01.134 { 00:19:01.134 "params": { 00:19:01.134 "name": "Nvme$subsystem", 00:19:01.134 "trtype": "$TEST_TRANSPORT", 00:19:01.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:01.134 "adrfam": "ipv4", 00:19:01.134 "trsvcid": "$NVMF_PORT", 00:19:01.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:01.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:01.134 "hdgst": ${hdgst:-false}, 00:19:01.134 "ddgst": ${ddgst:-false} 00:19:01.134 }, 00:19:01.134 "method": "bdev_nvme_attach_controller" 00:19:01.134 } 00:19:01.134 EOF 00:19:01.134 )") 00:19:01.134 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:01.134 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:01.134 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:01.134 { 00:19:01.134 "params": { 00:19:01.134 "name": "Nvme$subsystem", 00:19:01.134 "trtype": "$TEST_TRANSPORT", 00:19:01.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:01.134 "adrfam": "ipv4", 00:19:01.134 "trsvcid": "$NVMF_PORT", 00:19:01.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:01.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:01.134 "hdgst": ${hdgst:-false}, 00:19:01.134 "ddgst": ${ddgst:-false} 00:19:01.134 }, 00:19:01.134 "method": "bdev_nvme_attach_controller" 00:19:01.134 } 00:19:01.134 EOF 00:19:01.134 )") 00:19:01.134 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:01.134 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:19:01.134 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:01.134 { 00:19:01.134 "params": { 00:19:01.134 "name": "Nvme$subsystem", 00:19:01.134 "trtype": "$TEST_TRANSPORT", 00:19:01.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:01.134 "adrfam": "ipv4", 00:19:01.134 "trsvcid": "$NVMF_PORT", 00:19:01.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:01.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:01.134 "hdgst": ${hdgst:-false}, 00:19:01.134 "ddgst": ${ddgst:-false} 00:19:01.134 }, 00:19:01.134 "method": "bdev_nvme_attach_controller" 00:19:01.134 } 00:19:01.134 EOF 00:19:01.134 )") 00:19:01.134 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:01.134 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:01.134 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:01.134 { 00:19:01.134 "params": { 00:19:01.134 "name": "Nvme$subsystem", 00:19:01.134 "trtype": "$TEST_TRANSPORT", 00:19:01.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:01.134 "adrfam": "ipv4", 00:19:01.134 "trsvcid": "$NVMF_PORT", 00:19:01.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:01.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:01.134 "hdgst": ${hdgst:-false}, 00:19:01.134 "ddgst": ${ddgst:-false} 00:19:01.134 }, 00:19:01.134 "method": "bdev_nvme_attach_controller" 00:19:01.134 } 00:19:01.134 EOF 00:19:01.134 )") 00:19:01.134 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:01.134 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:01.134 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:01.134 { 00:19:01.134 "params": { 00:19:01.134 "name": "Nvme$subsystem", 00:19:01.134 "trtype": "$TEST_TRANSPORT", 00:19:01.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:01.134 "adrfam": "ipv4", 00:19:01.134 "trsvcid": "$NVMF_PORT", 00:19:01.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:01.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:01.134 "hdgst": ${hdgst:-false}, 00:19:01.134 "ddgst": ${ddgst:-false} 00:19:01.134 }, 00:19:01.134 "method": "bdev_nvme_attach_controller" 00:19:01.134 } 00:19:01.134 EOF 00:19:01.134 )") 00:19:01.134 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:01.134 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:01.134 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:01.134 { 00:19:01.134 "params": { 00:19:01.134 "name": "Nvme$subsystem", 00:19:01.134 "trtype": "$TEST_TRANSPORT", 00:19:01.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:01.134 "adrfam": "ipv4", 00:19:01.134 "trsvcid": "$NVMF_PORT", 00:19:01.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:01.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:01.134 "hdgst": ${hdgst:-false}, 00:19:01.134 "ddgst": ${ddgst:-false} 00:19:01.134 }, 00:19:01.134 "method": "bdev_nvme_attach_controller" 00:19:01.134 } 00:19:01.134 EOF 00:19:01.134 )") 00:19:01.134 19:01:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:01.134 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:19:01.134 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:01.134 19:01:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:01.134 "params": { 00:19:01.134 "name": "Nvme1", 00:19:01.134 "trtype": "tcp", 00:19:01.134 "traddr": "10.0.0.2", 00:19:01.134 "adrfam": "ipv4", 00:19:01.134 "trsvcid": "4420", 00:19:01.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.134 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:01.134 "hdgst": false, 00:19:01.134 "ddgst": false 00:19:01.134 }, 00:19:01.134 "method": "bdev_nvme_attach_controller" 00:19:01.134 },{ 00:19:01.134 "params": { 00:19:01.134 "name": "Nvme2", 00:19:01.134 "trtype": "tcp", 00:19:01.134 "traddr": "10.0.0.2", 00:19:01.134 "adrfam": "ipv4", 00:19:01.134 "trsvcid": "4420", 00:19:01.134 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:01.134 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:01.134 "hdgst": false, 00:19:01.134 "ddgst": false 00:19:01.134 }, 00:19:01.134 "method": "bdev_nvme_attach_controller" 00:19:01.134 },{ 00:19:01.134 "params": { 00:19:01.134 "name": "Nvme3", 00:19:01.134 "trtype": "tcp", 00:19:01.134 "traddr": "10.0.0.2", 00:19:01.134 "adrfam": "ipv4", 00:19:01.134 "trsvcid": "4420", 00:19:01.134 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:01.134 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:01.134 "hdgst": false, 00:19:01.134 "ddgst": false 00:19:01.134 }, 00:19:01.134 "method": "bdev_nvme_attach_controller" 00:19:01.134 },{ 00:19:01.134 "params": { 00:19:01.134 "name": "Nvme4", 00:19:01.134 "trtype": "tcp", 00:19:01.134 "traddr": "10.0.0.2", 00:19:01.134 "adrfam": "ipv4", 00:19:01.134 "trsvcid": "4420", 00:19:01.134 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:01.134 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:01.134 "hdgst": false, 00:19:01.134 "ddgst": false 00:19:01.134 }, 00:19:01.134 "method": "bdev_nvme_attach_controller" 00:19:01.134 },{ 00:19:01.134 "params": { 00:19:01.134 "name": "Nvme5", 00:19:01.134 "trtype": "tcp", 00:19:01.134 "traddr": "10.0.0.2", 00:19:01.134 "adrfam": "ipv4", 00:19:01.134 "trsvcid": "4420", 00:19:01.134 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:01.134 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:01.134 "hdgst": false, 00:19:01.134 "ddgst": false 00:19:01.134 }, 00:19:01.134 "method": "bdev_nvme_attach_controller" 00:19:01.134 },{ 00:19:01.134 "params": { 00:19:01.134 "name": "Nvme6", 00:19:01.134 "trtype": "tcp", 00:19:01.134 "traddr": "10.0.0.2", 00:19:01.134 "adrfam": "ipv4", 00:19:01.134 "trsvcid": "4420", 00:19:01.134 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:01.134 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:01.134 "hdgst": false, 00:19:01.135 "ddgst": false 00:19:01.135 }, 00:19:01.135 "method": "bdev_nvme_attach_controller" 00:19:01.135 },{ 00:19:01.135 "params": { 00:19:01.135 "name": "Nvme7", 00:19:01.135 "trtype": "tcp", 00:19:01.135 "traddr": "10.0.0.2", 00:19:01.135 "adrfam": "ipv4", 00:19:01.135 "trsvcid": "4420", 00:19:01.135 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:01.135 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:01.135 "hdgst": false, 00:19:01.135 "ddgst": false 00:19:01.135 }, 00:19:01.135 "method": "bdev_nvme_attach_controller" 00:19:01.135 },{ 00:19:01.135 "params": { 00:19:01.135 "name": "Nvme8", 00:19:01.135 "trtype": "tcp", 
00:19:01.135 "traddr": "10.0.0.2", 00:19:01.135 "adrfam": "ipv4", 00:19:01.135 "trsvcid": "4420", 00:19:01.135 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:01.135 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:01.135 "hdgst": false, 00:19:01.135 "ddgst": false 00:19:01.135 }, 00:19:01.135 "method": "bdev_nvme_attach_controller" 00:19:01.135 },{ 00:19:01.135 "params": { 00:19:01.135 "name": "Nvme9", 00:19:01.135 "trtype": "tcp", 00:19:01.135 "traddr": "10.0.0.2", 00:19:01.135 "adrfam": "ipv4", 00:19:01.135 "trsvcid": "4420", 00:19:01.135 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:01.135 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:01.135 "hdgst": false, 00:19:01.135 "ddgst": false 00:19:01.135 }, 00:19:01.135 "method": "bdev_nvme_attach_controller" 00:19:01.135 },{ 00:19:01.135 "params": { 00:19:01.135 "name": "Nvme10", 00:19:01.135 "trtype": "tcp", 00:19:01.135 "traddr": "10.0.0.2", 00:19:01.135 "adrfam": "ipv4", 00:19:01.135 "trsvcid": "4420", 00:19:01.135 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:01.135 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:01.135 "hdgst": false, 00:19:01.135 "ddgst": false 00:19:01.135 }, 00:19:01.135 "method": "bdev_nvme_attach_controller" 00:19:01.135 }' 00:19:01.135 [2024-07-24 19:01:38.563505] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:19:01.135 [2024-07-24 19:01:38.563591] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:01.135 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.135 [2024-07-24 19:01:38.626748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.393 [2024-07-24 19:01:38.737498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.767 19:01:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:02.767 19:01:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:19:02.767 19:01:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:02.767 19:01:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.767 19:01:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:02.767 19:01:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.767 19:01:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3188566 00:19:02.767 19:01:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:19:02.767 19:01:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:19:03.701 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3188566 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:19:03.701 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3188392 00:19:03.701 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:03.701 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:03.701 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:03.701 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:03.701 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:03.701 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:03.701 { 00:19:03.701 "params": { 00:19:03.701 "name": "Nvme$subsystem", 00:19:03.701 "trtype": "$TEST_TRANSPORT", 00:19:03.701 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:03.701 "adrfam": "ipv4", 00:19:03.701 "trsvcid": "$NVMF_PORT", 00:19:03.701 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:03.701 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:03.701 "hdgst": ${hdgst:-false}, 00:19:03.701 "ddgst": ${ddgst:-false} 00:19:03.701 }, 00:19:03.701 "method": "bdev_nvme_attach_controller" 00:19:03.701 } 00:19:03.701 EOF 00:19:03.701 )") 00:19:03.701 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:03.701 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:03.701 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:03.701 { 00:19:03.701 "params": { 00:19:03.701 "name": "Nvme$subsystem", 00:19:03.701 "trtype": "$TEST_TRANSPORT", 00:19:03.701 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:03.701 "adrfam": "ipv4", 00:19:03.701 "trsvcid": "$NVMF_PORT", 00:19:03.701 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:03.701 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:03.701 "hdgst": ${hdgst:-false}, 00:19:03.701 "ddgst": ${ddgst:-false} 00:19:03.701 }, 00:19:03.701 "method": "bdev_nvme_attach_controller" 00:19:03.701 } 00:19:03.701 EOF 00:19:03.701 )") 00:19:03.701 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:03.701 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:03.701 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:03.701 { 00:19:03.701 "params": { 00:19:03.701 "name": "Nvme$subsystem", 00:19:03.701 "trtype": "$TEST_TRANSPORT", 00:19:03.701 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:03.701 "adrfam": "ipv4", 00:19:03.701 "trsvcid": "$NVMF_PORT", 00:19:03.701 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:03.701 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:03.701 "hdgst": ${hdgst:-false}, 00:19:03.701 "ddgst": ${ddgst:-false} 00:19:03.701 }, 00:19:03.701 "method": "bdev_nvme_attach_controller" 00:19:03.701 } 00:19:03.701 EOF 00:19:03.701 )") 00:19:03.701 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:03.701 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:03.701 19:01:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:03.701 { 00:19:03.701 "params": { 00:19:03.701 "name": "Nvme$subsystem", 00:19:03.701 "trtype": "$TEST_TRANSPORT", 00:19:03.701 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:03.701 "adrfam": "ipv4", 00:19:03.701 "trsvcid": "$NVMF_PORT", 00:19:03.701 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:03.701 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:03.701 "hdgst": ${hdgst:-false}, 00:19:03.701 "ddgst": ${ddgst:-false} 00:19:03.701 }, 00:19:03.701 "method": "bdev_nvme_attach_controller" 00:19:03.701 } 00:19:03.701 EOF 00:19:03.701 )") 00:19:03.701 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:03.701 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:03.701 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:03.701 { 00:19:03.701 "params": { 00:19:03.701 "name": "Nvme$subsystem", 00:19:03.701 "trtype": "$TEST_TRANSPORT", 00:19:03.701 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:03.701 "adrfam": "ipv4", 00:19:03.701 "trsvcid": "$NVMF_PORT", 00:19:03.701 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:03.701 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:03.701 "hdgst": ${hdgst:-false}, 00:19:03.701 "ddgst": ${ddgst:-false} 00:19:03.701 }, 00:19:03.701 "method": "bdev_nvme_attach_controller" 00:19:03.701 } 00:19:03.701 EOF 00:19:03.701 )") 00:19:03.701 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:03.701 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:03.701 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:03.701 { 00:19:03.701 "params": { 00:19:03.701 "name": "Nvme$subsystem", 00:19:03.701 "trtype": "$TEST_TRANSPORT", 00:19:03.701 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:03.701 "adrfam": "ipv4", 00:19:03.701 "trsvcid": "$NVMF_PORT", 00:19:03.701 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:03.701 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:03.701 "hdgst": ${hdgst:-false}, 00:19:03.701 "ddgst": ${ddgst:-false} 00:19:03.701 }, 00:19:03.701 "method": "bdev_nvme_attach_controller" 00:19:03.701 } 00:19:03.701 EOF 00:19:03.701 )") 00:19:03.701 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:03.701 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:03.701 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:03.701 { 00:19:03.701 "params": { 00:19:03.701 "name": "Nvme$subsystem", 00:19:03.701 "trtype": "$TEST_TRANSPORT", 00:19:03.701 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:03.701 "adrfam": "ipv4", 00:19:03.701 "trsvcid": "$NVMF_PORT", 00:19:03.701 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:03.701 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:03.701 "hdgst": ${hdgst:-false}, 00:19:03.701 "ddgst": ${ddgst:-false} 00:19:03.701 }, 00:19:03.701 "method": "bdev_nvme_attach_controller" 00:19:03.701 } 00:19:03.701 EOF 00:19:03.701 )") 00:19:03.701 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:19:03.701 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:03.701 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:03.701 { 00:19:03.701 "params": { 00:19:03.701 "name": "Nvme$subsystem", 00:19:03.701 "trtype": "$TEST_TRANSPORT", 00:19:03.701 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:03.701 "adrfam": "ipv4", 00:19:03.701 "trsvcid": "$NVMF_PORT", 00:19:03.701 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:03.701 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:03.701 "hdgst": ${hdgst:-false}, 00:19:03.701 "ddgst": ${ddgst:-false} 00:19:03.702 }, 00:19:03.702 "method": "bdev_nvme_attach_controller" 00:19:03.702 } 00:19:03.702 EOF 00:19:03.702 )") 00:19:03.702 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:03.702 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:03.702 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:03.702 { 00:19:03.702 "params": { 00:19:03.702 "name": "Nvme$subsystem", 00:19:03.702 "trtype": "$TEST_TRANSPORT", 00:19:03.702 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:03.702 "adrfam": "ipv4", 00:19:03.702 "trsvcid": "$NVMF_PORT", 00:19:03.702 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:03.702 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:03.702 "hdgst": ${hdgst:-false}, 00:19:03.702 "ddgst": ${ddgst:-false} 00:19:03.702 }, 00:19:03.702 "method": "bdev_nvme_attach_controller" 00:19:03.702 } 00:19:03.702 EOF 00:19:03.702 )") 00:19:03.702 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:03.702 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:03.702 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:03.702 { 00:19:03.702 "params": { 00:19:03.702 "name": "Nvme$subsystem", 00:19:03.702 "trtype": "$TEST_TRANSPORT", 00:19:03.702 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:03.702 "adrfam": "ipv4", 00:19:03.702 "trsvcid": "$NVMF_PORT", 00:19:03.702 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:03.702 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:03.702 "hdgst": ${hdgst:-false}, 00:19:03.702 "ddgst": ${ddgst:-false} 00:19:03.702 }, 00:19:03.702 "method": "bdev_nvme_attach_controller" 00:19:03.702 } 00:19:03.702 EOF 00:19:03.702 )") 00:19:03.702 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:03.702 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
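Annotation: the loop traced above is the heart of gen_nvmf_target_json — one heredoc fragment is emitted per subsystem ("# cat"), the fragments are comma-joined via IFS, and the result is validated with "# jq ." before being handed to bdevperf over a /dev/fd pipe, as the printf expansion that follows shows. A minimal self-contained sketch of the same pattern, assuming the function name gen_target_json_sketch, the outer "subsystems"/"config" wrapper object, and the exported defaults below — the trace itself only guarantees the fragment template, the IFS join, and the jq step:

TEST_TRANSPORT=${TEST_TRANSPORT:-tcp}
NVMF_FIRST_TARGET_IP=${NVMF_FIRST_TARGET_IP:-10.0.0.2}
NVMF_PORT=${NVMF_PORT:-4420}

gen_target_json_sketch() {
  local subsystem
  local config=()
  for subsystem in "${@:-1}"; do
    # One heredoc fragment per subsystem; unset hdgst/ddgst fall back to
    # false, matching the expanded output in the trace.
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  # Comma-join the fragments ("${config[*]}" joins on the first IFS char)
  # and let jq both validate and pretty-print the assembled document.
  local IFS=,
  printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "${config[*]}" | jq .
}

gen_target_json_sketch 1 2 3 4 5 6 7 8 9 10

Collecting the fragments in an array and joining once keeps the quoting manageable, and the jq pass doubles as a cheap syntax check before the JSON reaches the consumer.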
00:19:03.702 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:03.702 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:03.702 "params": { 00:19:03.702 "name": "Nvme1", 00:19:03.702 "trtype": "tcp", 00:19:03.702 "traddr": "10.0.0.2", 00:19:03.702 "adrfam": "ipv4", 00:19:03.702 "trsvcid": "4420", 00:19:03.702 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.702 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:03.702 "hdgst": false, 00:19:03.702 "ddgst": false 00:19:03.702 }, 00:19:03.702 "method": "bdev_nvme_attach_controller" 00:19:03.702 },{ 00:19:03.702 "params": { 00:19:03.702 "name": "Nvme2", 00:19:03.702 "trtype": "tcp", 00:19:03.702 "traddr": "10.0.0.2", 00:19:03.702 "adrfam": "ipv4", 00:19:03.702 "trsvcid": "4420", 00:19:03.702 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:03.702 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:03.702 "hdgst": false, 00:19:03.702 "ddgst": false 00:19:03.702 }, 00:19:03.702 "method": "bdev_nvme_attach_controller" 00:19:03.702 },{ 00:19:03.702 "params": { 00:19:03.702 "name": "Nvme3", 00:19:03.702 "trtype": "tcp", 00:19:03.702 "traddr": "10.0.0.2", 00:19:03.702 "adrfam": "ipv4", 00:19:03.702 "trsvcid": "4420", 00:19:03.702 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:03.702 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:03.702 "hdgst": false, 00:19:03.702 "ddgst": false 00:19:03.702 }, 00:19:03.702 "method": "bdev_nvme_attach_controller" 00:19:03.702 },{ 00:19:03.702 "params": { 00:19:03.702 "name": "Nvme4", 00:19:03.702 "trtype": "tcp", 00:19:03.702 "traddr": "10.0.0.2", 00:19:03.702 "adrfam": "ipv4", 00:19:03.702 "trsvcid": "4420", 00:19:03.702 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:03.702 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:03.702 "hdgst": false, 00:19:03.702 "ddgst": false 00:19:03.702 }, 00:19:03.702 "method": "bdev_nvme_attach_controller" 00:19:03.702 },{ 00:19:03.702 "params": { 00:19:03.702 "name": "Nvme5", 00:19:03.702 "trtype": "tcp", 00:19:03.702 "traddr": "10.0.0.2", 00:19:03.702 "adrfam": "ipv4", 00:19:03.702 "trsvcid": "4420", 00:19:03.702 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:03.702 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:03.702 "hdgst": false, 00:19:03.702 "ddgst": false 00:19:03.702 }, 00:19:03.702 "method": "bdev_nvme_attach_controller" 00:19:03.702 },{ 00:19:03.702 "params": { 00:19:03.702 "name": "Nvme6", 00:19:03.702 "trtype": "tcp", 00:19:03.702 "traddr": "10.0.0.2", 00:19:03.702 "adrfam": "ipv4", 00:19:03.702 "trsvcid": "4420", 00:19:03.702 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:03.702 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:03.702 "hdgst": false, 00:19:03.702 "ddgst": false 00:19:03.702 }, 00:19:03.702 "method": "bdev_nvme_attach_controller" 00:19:03.702 },{ 00:19:03.702 "params": { 00:19:03.702 "name": "Nvme7", 00:19:03.702 "trtype": "tcp", 00:19:03.702 "traddr": "10.0.0.2", 00:19:03.702 "adrfam": "ipv4", 00:19:03.702 "trsvcid": "4420", 00:19:03.702 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:03.702 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:03.702 "hdgst": false, 00:19:03.702 "ddgst": false 00:19:03.702 }, 00:19:03.702 "method": "bdev_nvme_attach_controller" 00:19:03.702 },{ 00:19:03.702 "params": { 00:19:03.702 "name": "Nvme8", 00:19:03.702 "trtype": "tcp", 00:19:03.702 "traddr": "10.0.0.2", 00:19:03.702 "adrfam": "ipv4", 00:19:03.702 "trsvcid": "4420", 00:19:03.702 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:03.702 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:19:03.702 "hdgst": false, 00:19:03.702 "ddgst": false 00:19:03.702 }, 00:19:03.702 "method": "bdev_nvme_attach_controller" 00:19:03.702 },{ 00:19:03.702 "params": { 00:19:03.702 "name": "Nvme9", 00:19:03.702 "trtype": "tcp", 00:19:03.702 "traddr": "10.0.0.2", 00:19:03.702 "adrfam": "ipv4", 00:19:03.702 "trsvcid": "4420", 00:19:03.702 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:03.702 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:03.702 "hdgst": false, 00:19:03.702 "ddgst": false 00:19:03.702 }, 00:19:03.702 "method": "bdev_nvme_attach_controller" 00:19:03.702 },{ 00:19:03.702 "params": { 00:19:03.702 "name": "Nvme10", 00:19:03.702 "trtype": "tcp", 00:19:03.702 "traddr": "10.0.0.2", 00:19:03.702 "adrfam": "ipv4", 00:19:03.702 "trsvcid": "4420", 00:19:03.702 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:03.702 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:03.702 "hdgst": false, 00:19:03.702 "ddgst": false 00:19:03.702 }, 00:19:03.702 "method": "bdev_nvme_attach_controller" 00:19:03.702 }' 00:19:03.702 [2024-07-24 19:01:41.281406] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:19:03.702 [2024-07-24 19:01:41.281512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3188871 ] 00:19:03.960 EAL: No free 2048 kB hugepages reported on node 1 00:19:03.961 [2024-07-24 19:01:41.344956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.961 [2024-07-24 19:01:41.455645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.332 Running I/O for 1 seconds... 00:19:06.750 00:19:06.750 Latency(us) 00:19:06.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.750 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:06.750 Verification LBA range: start 0x0 length 0x400 00:19:06.750 Nvme1n1 : 1.09 235.28 14.71 0.00 0.00 269054.48 23107.51 246997.90 00:19:06.750 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:06.750 Verification LBA range: start 0x0 length 0x400 00:19:06.750 Nvme2n1 : 1.13 226.73 14.17 0.00 0.00 274965.81 21845.33 256318.58 00:19:06.750 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:06.750 Verification LBA range: start 0x0 length 0x400 00:19:06.750 Nvme3n1 : 1.07 239.00 14.94 0.00 0.00 255252.48 23787.14 250104.79 00:19:06.750 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:06.750 Verification LBA range: start 0x0 length 0x400 00:19:06.750 Nvme4n1 : 1.12 227.75 14.23 0.00 0.00 263613.25 22427.88 251658.24 00:19:06.750 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:06.750 Verification LBA range: start 0x0 length 0x400 00:19:06.750 Nvme5n1 : 1.09 243.30 15.21 0.00 0.00 240257.42 8883.77 251658.24 00:19:06.750 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:06.750 Verification LBA range: start 0x0 length 0x400 00:19:06.750 Nvme6n1 : 1.14 224.40 14.03 0.00 0.00 259502.46 22427.88 262532.36 00:19:06.750 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:06.750 Verification LBA range: start 0x0 length 0x400 00:19:06.750 Nvme7n1 : 1.14 225.18 14.07 0.00 0.00 254195.67 22330.79 248551.35 00:19:06.750 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:06.750 
Verification LBA range: start 0x0 length 0x400 00:19:06.750 Nvme8n1 : 1.17 219.09 13.69 0.00 0.00 257485.94 21845.33 298261.62 00:19:06.750 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:06.750 Verification LBA range: start 0x0 length 0x400 00:19:06.750 Nvme9n1 : 1.18 272.14 17.01 0.00 0.00 204039.81 16699.54 231463.44 00:19:06.750 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:06.750 Verification LBA range: start 0x0 length 0x400 00:19:06.750 Nvme10n1 : 1.19 269.78 16.86 0.00 0.00 202291.81 5121.52 264085.81 00:19:06.750 =================================================================================================================== 00:19:06.750 Total : 2382.63 148.91 0.00 0.00 245906.80 5121.52 298261.62 00:19:07.008 19:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:19:07.008 19:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:07.008 19:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:07.008 19:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:07.008 19:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:07.008 19:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:07.008 19:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:19:07.008 19:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:07.008 19:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:19:07.008 19:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:07.008 19:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:07.008 rmmod nvme_tcp 00:19:07.008 rmmod nvme_fabrics 00:19:07.008 rmmod nvme_keyring 00:19:07.008 19:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:07.008 19:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:19:07.008 19:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:19:07.008 19:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3188392 ']' 00:19:07.008 19:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3188392 00:19:07.008 19:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 3188392 ']' 00:19:07.008 19:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 3188392 00:19:07.008 19:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:19:07.008 19:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
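Annotation: a quick cross-check on the bdevperf table above. The run was started with -q 64 -o 65536 -w verify -t 1, so every I/O is 65536 bytes and the MiB/s column is fully determined by the IOPS column: MiB/s = IOPS * 65536 / 2^20 = IOPS / 16. A throwaway awk check (not part of the test) against the first row and the Total row:

# 65536-byte I/Os: MiB/s = IOPS / 16
awk 'BEGIN { print 235.28 / 16, 2382.63 / 16 }'
# -> 14.705 148.914, matching the rounded 14.71 (Nvme1n1) and 148.91 (Total)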
00:19:07.008 19:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3188392 00:19:07.008 19:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:07.008 19:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:07.008 19:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3188392' 00:19:07.008 killing process with pid 3188392 00:19:07.008 19:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 3188392 00:19:07.008 19:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 3188392 00:19:07.575 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:07.575 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:07.575 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:07.575 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:07.575 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:07.575 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.575 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:07.575 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.478 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:09.478 00:19:09.478 real 0m11.427s 00:19:09.478 user 0m32.682s 00:19:09.478 sys 0m3.147s 00:19:09.478 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:09.478 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:09.478 ************************************ 00:19:09.478 END TEST nvmf_shutdown_tc1 00:19:09.478 ************************************ 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:09.737 ************************************ 00:19:09.737 START TEST nvmf_shutdown_tc2 00:19:09.737 ************************************ 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:19:09.737 19:01:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local 
-ga mlx 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:09.737 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:09.737 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:09.737 Found net devices under 0000:09:00.0: cvl_0_0 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:09.737 19:01:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:09.737 Found net devices under 0000:09:00.1: cvl_0_1 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:09.737 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:09.738 19:01:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:09.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:09.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:19:09.738 00:19:09.738 --- 10.0.0.2 ping statistics --- 00:19:09.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.738 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:09.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:09.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:19:09.738 00:19:09.738 --- 10.0.0.1 ping statistics --- 00:19:09.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.738 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3189640 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3189640 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3189640 ']' 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:09.738 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:09.997 [2024-07-24 19:01:47.342937] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:19:09.997 [2024-07-24 19:01:47.343032] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:09.997 EAL: No free 2048 kB hugepages reported on node 1 00:19:09.997 [2024-07-24 19:01:47.409821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:09.997 [2024-07-24 19:01:47.520238] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:09.997 [2024-07-24 19:01:47.520300] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:09.997 [2024-07-24 19:01:47.520313] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:09.997 [2024-07-24 19:01:47.520325] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:09.997 [2024-07-24 19:01:47.520334] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
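Annotation: unlike the single-core bdev_svc/bdevperf runs of tc1 (-c 0x1, reactor on core 0), the tc2 target is launched inside the cvl_0_0_ns_spdk namespace with -m 0x1E. 0x1E is binary 11110, which is why the reactor notices just below report cores 1-4 while core 0 stays free for the initiator side. A one-off bash sketch for decoding such a core mask (the loop bound of 8 cores is an arbitrary illustration for this host):

# Decode an SPDK/DPDK core mask: bit N set means a reactor runs on core N.
mask=0x1E
printf 'mask %s selects cores:' "$mask"
for core in {0..7}; do
    (( (mask >> core) & 1 )) && printf ' %d' "$core"
done
printf '\n'
# -> mask 0x1E selects cores: 1 2 3 4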
00:19:09.997 [2024-07-24 19:01:47.520384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:09.997 [2024-07-24 19:01:47.520441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:09.997 [2024-07-24 19:01:47.520484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:09.997 [2024-07-24 19:01:47.520487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:10.254 [2024-07-24 19:01:47.685698] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.254 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:10.254 Malloc1 00:19:10.254 [2024-07-24 19:01:47.767634] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:10.254 Malloc2 00:19:10.254 Malloc3 00:19:10.511 Malloc4 00:19:10.511 Malloc5 00:19:10.511 Malloc6 00:19:10.511 Malloc7 00:19:10.511 Malloc8 00:19:10.769 Malloc9 00:19:10.769 Malloc10 00:19:10.769 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.769 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:10.769 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:10.769 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:10.769 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3189821 00:19:10.769 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3189821 /var/tmp/bdevperf.sock 00:19:10.769 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3189821 ']' 00:19:10.769 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:10.769 19:01:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:10.769 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:10.769 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:10.769 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:10.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:10.769 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:19:10.770 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:10.770 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:19:10.770 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:10.770 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:10.770 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:10.770 { 00:19:10.770 "params": { 00:19:10.770 "name": "Nvme$subsystem", 00:19:10.770 "trtype": "$TEST_TRANSPORT", 00:19:10.770 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:10.770 "adrfam": "ipv4", 00:19:10.770 "trsvcid": "$NVMF_PORT", 00:19:10.770 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:10.770 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:10.770 "hdgst": ${hdgst:-false}, 00:19:10.770 "ddgst": ${ddgst:-false} 00:19:10.770 }, 00:19:10.770 "method": "bdev_nvme_attach_controller" 00:19:10.770 } 00:19:10.770 EOF 00:19:10.770 )") 00:19:10.770 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:10.770 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:10.770 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:10.770 { 00:19:10.770 "params": { 00:19:10.770 "name": "Nvme$subsystem", 00:19:10.770 "trtype": "$TEST_TRANSPORT", 00:19:10.770 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:10.770 "adrfam": "ipv4", 00:19:10.770 "trsvcid": "$NVMF_PORT", 00:19:10.770 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:10.770 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:10.770 "hdgst": ${hdgst:-false}, 00:19:10.770 "ddgst": ${ddgst:-false} 00:19:10.770 }, 00:19:10.770 "method": "bdev_nvme_attach_controller" 00:19:10.770 } 00:19:10.770 EOF 00:19:10.770 )") 00:19:10.770 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:10.770 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:10.770 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:10.770 { 00:19:10.770 "params": { 00:19:10.770 
"name": "Nvme$subsystem", 00:19:10.770 "trtype": "$TEST_TRANSPORT", 00:19:10.770 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:10.770 "adrfam": "ipv4", 00:19:10.770 "trsvcid": "$NVMF_PORT", 00:19:10.770 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:10.770 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:10.770 "hdgst": ${hdgst:-false}, 00:19:10.770 "ddgst": ${ddgst:-false} 00:19:10.770 }, 00:19:10.770 "method": "bdev_nvme_attach_controller" 00:19:10.770 } 00:19:10.770 EOF 00:19:10.770 )") 00:19:10.770 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:10.770 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:10.770 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:10.770 { 00:19:10.770 "params": { 00:19:10.770 "name": "Nvme$subsystem", 00:19:10.770 "trtype": "$TEST_TRANSPORT", 00:19:10.770 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:10.770 "adrfam": "ipv4", 00:19:10.770 "trsvcid": "$NVMF_PORT", 00:19:10.770 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:10.770 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:10.770 "hdgst": ${hdgst:-false}, 00:19:10.770 "ddgst": ${ddgst:-false} 00:19:10.770 }, 00:19:10.770 "method": "bdev_nvme_attach_controller" 00:19:10.770 } 00:19:10.770 EOF 00:19:10.770 )") 00:19:10.770 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:10.770 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:10.770 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:10.770 { 00:19:10.770 "params": { 00:19:10.770 "name": "Nvme$subsystem", 00:19:10.770 "trtype": "$TEST_TRANSPORT", 00:19:10.770 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:10.770 "adrfam": "ipv4", 00:19:10.770 "trsvcid": "$NVMF_PORT", 00:19:10.770 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:10.770 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:10.770 "hdgst": ${hdgst:-false}, 00:19:10.770 "ddgst": ${ddgst:-false} 00:19:10.770 }, 00:19:10.770 "method": "bdev_nvme_attach_controller" 00:19:10.770 } 00:19:10.770 EOF 00:19:10.770 )") 00:19:10.770 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:10.770 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:10.770 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:10.770 { 00:19:10.770 "params": { 00:19:10.770 "name": "Nvme$subsystem", 00:19:10.770 "trtype": "$TEST_TRANSPORT", 00:19:10.770 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:10.770 "adrfam": "ipv4", 00:19:10.770 "trsvcid": "$NVMF_PORT", 00:19:10.770 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:10.770 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:10.770 "hdgst": ${hdgst:-false}, 00:19:10.770 "ddgst": ${ddgst:-false} 00:19:10.770 }, 00:19:10.770 "method": "bdev_nvme_attach_controller" 00:19:10.770 } 00:19:10.770 EOF 00:19:10.770 )") 00:19:10.770 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:10.770 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:19:10.770 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:10.770 { 00:19:10.770 "params": { 00:19:10.770 "name": "Nvme$subsystem", 00:19:10.770 "trtype": "$TEST_TRANSPORT", 00:19:10.770 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:10.770 "adrfam": "ipv4", 00:19:10.770 "trsvcid": "$NVMF_PORT", 00:19:10.770 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:10.770 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:10.770 "hdgst": ${hdgst:-false}, 00:19:10.770 "ddgst": ${ddgst:-false} 00:19:10.770 }, 00:19:10.770 "method": "bdev_nvme_attach_controller" 00:19:10.770 } 00:19:10.770 EOF 00:19:10.770 )") 00:19:10.770 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:10.770 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:10.770 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:10.770 { 00:19:10.770 "params": { 00:19:10.770 "name": "Nvme$subsystem", 00:19:10.770 "trtype": "$TEST_TRANSPORT", 00:19:10.770 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:10.770 "adrfam": "ipv4", 00:19:10.770 "trsvcid": "$NVMF_PORT", 00:19:10.770 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:10.770 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:10.770 "hdgst": ${hdgst:-false}, 00:19:10.770 "ddgst": ${ddgst:-false} 00:19:10.770 }, 00:19:10.770 "method": "bdev_nvme_attach_controller" 00:19:10.770 } 00:19:10.770 EOF 00:19:10.770 )") 00:19:10.770 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:10.770 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:10.770 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:10.770 { 00:19:10.770 "params": { 00:19:10.770 "name": "Nvme$subsystem", 00:19:10.770 "trtype": "$TEST_TRANSPORT", 00:19:10.770 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:10.770 "adrfam": "ipv4", 00:19:10.770 "trsvcid": "$NVMF_PORT", 00:19:10.770 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:10.770 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:10.770 "hdgst": ${hdgst:-false}, 00:19:10.770 "ddgst": ${ddgst:-false} 00:19:10.770 }, 00:19:10.770 "method": "bdev_nvme_attach_controller" 00:19:10.770 } 00:19:10.770 EOF 00:19:10.770 )") 00:19:10.770 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:10.770 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:10.770 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:10.770 { 00:19:10.770 "params": { 00:19:10.770 "name": "Nvme$subsystem", 00:19:10.770 "trtype": "$TEST_TRANSPORT", 00:19:10.770 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:10.770 "adrfam": "ipv4", 00:19:10.770 "trsvcid": "$NVMF_PORT", 00:19:10.770 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:10.770 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:10.770 "hdgst": ${hdgst:-false}, 00:19:10.770 "ddgst": ${ddgst:-false} 00:19:10.770 }, 00:19:10.770 "method": "bdev_nvme_attach_controller" 00:19:10.770 } 00:19:10.770 EOF 00:19:10.770 )") 00:19:10.770 19:01:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:10.770 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:19:10.771 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:19:10.771 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:10.771 "params": { 00:19:10.771 "name": "Nvme1", 00:19:10.771 "trtype": "tcp", 00:19:10.771 "traddr": "10.0.0.2", 00:19:10.771 "adrfam": "ipv4", 00:19:10.771 "trsvcid": "4420", 00:19:10.771 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:10.771 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:10.771 "hdgst": false, 00:19:10.771 "ddgst": false 00:19:10.771 }, 00:19:10.771 "method": "bdev_nvme_attach_controller" 00:19:10.771 },{ 00:19:10.771 "params": { 00:19:10.771 "name": "Nvme2", 00:19:10.771 "trtype": "tcp", 00:19:10.771 "traddr": "10.0.0.2", 00:19:10.771 "adrfam": "ipv4", 00:19:10.771 "trsvcid": "4420", 00:19:10.771 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:10.771 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:10.771 "hdgst": false, 00:19:10.771 "ddgst": false 00:19:10.771 }, 00:19:10.771 "method": "bdev_nvme_attach_controller" 00:19:10.771 },{ 00:19:10.771 "params": { 00:19:10.771 "name": "Nvme3", 00:19:10.771 "trtype": "tcp", 00:19:10.771 "traddr": "10.0.0.2", 00:19:10.771 "adrfam": "ipv4", 00:19:10.771 "trsvcid": "4420", 00:19:10.771 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:10.771 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:10.771 "hdgst": false, 00:19:10.771 "ddgst": false 00:19:10.771 }, 00:19:10.771 "method": "bdev_nvme_attach_controller" 00:19:10.771 },{ 00:19:10.771 "params": { 00:19:10.771 "name": "Nvme4", 00:19:10.771 "trtype": "tcp", 00:19:10.771 "traddr": "10.0.0.2", 00:19:10.771 "adrfam": "ipv4", 00:19:10.771 "trsvcid": "4420", 00:19:10.771 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:10.771 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:10.771 "hdgst": false, 00:19:10.771 "ddgst": false 00:19:10.771 }, 00:19:10.771 "method": "bdev_nvme_attach_controller" 00:19:10.771 },{ 00:19:10.771 "params": { 00:19:10.771 "name": "Nvme5", 00:19:10.771 "trtype": "tcp", 00:19:10.771 "traddr": "10.0.0.2", 00:19:10.771 "adrfam": "ipv4", 00:19:10.771 "trsvcid": "4420", 00:19:10.771 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:10.771 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:10.771 "hdgst": false, 00:19:10.771 "ddgst": false 00:19:10.771 }, 00:19:10.771 "method": "bdev_nvme_attach_controller" 00:19:10.771 },{ 00:19:10.771 "params": { 00:19:10.771 "name": "Nvme6", 00:19:10.771 "trtype": "tcp", 00:19:10.771 "traddr": "10.0.0.2", 00:19:10.771 "adrfam": "ipv4", 00:19:10.771 "trsvcid": "4420", 00:19:10.771 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:10.771 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:10.771 "hdgst": false, 00:19:10.771 "ddgst": false 00:19:10.771 }, 00:19:10.771 "method": "bdev_nvme_attach_controller" 00:19:10.771 },{ 00:19:10.771 "params": { 00:19:10.771 "name": "Nvme7", 00:19:10.771 "trtype": "tcp", 00:19:10.771 "traddr": "10.0.0.2", 00:19:10.771 "adrfam": "ipv4", 00:19:10.771 "trsvcid": "4420", 00:19:10.771 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:10.771 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:10.771 "hdgst": false, 00:19:10.771 "ddgst": false 00:19:10.771 }, 00:19:10.771 "method": "bdev_nvme_attach_controller" 00:19:10.771 },{ 00:19:10.771 "params": { 00:19:10.771 "name": "Nvme8", 00:19:10.771 "trtype": "tcp", 
00:19:10.771 "traddr": "10.0.0.2", 00:19:10.771 "adrfam": "ipv4", 00:19:10.771 "trsvcid": "4420", 00:19:10.771 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:10.771 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:10.771 "hdgst": false, 00:19:10.771 "ddgst": false 00:19:10.771 }, 00:19:10.771 "method": "bdev_nvme_attach_controller" 00:19:10.771 },{ 00:19:10.771 "params": { 00:19:10.771 "name": "Nvme9", 00:19:10.771 "trtype": "tcp", 00:19:10.771 "traddr": "10.0.0.2", 00:19:10.771 "adrfam": "ipv4", 00:19:10.771 "trsvcid": "4420", 00:19:10.771 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:10.771 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:10.771 "hdgst": false, 00:19:10.771 "ddgst": false 00:19:10.771 }, 00:19:10.771 "method": "bdev_nvme_attach_controller" 00:19:10.771 },{ 00:19:10.771 "params": { 00:19:10.771 "name": "Nvme10", 00:19:10.771 "trtype": "tcp", 00:19:10.771 "traddr": "10.0.0.2", 00:19:10.771 "adrfam": "ipv4", 00:19:10.771 "trsvcid": "4420", 00:19:10.771 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:10.771 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:10.771 "hdgst": false, 00:19:10.771 "ddgst": false 00:19:10.771 }, 00:19:10.771 "method": "bdev_nvme_attach_controller" 00:19:10.771 }' 00:19:10.771 [2024-07-24 19:01:48.266666] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:19:10.771 [2024-07-24 19:01:48.266757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3189821 ] 00:19:10.771 EAL: No free 2048 kB hugepages reported on node 1 00:19:10.771 [2024-07-24 19:01:48.329596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.029 [2024-07-24 19:01:48.440238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.926 Running I/O for 10 seconds... 
00:19:12.926 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:12.926 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:19:12.926 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:12.926 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.926 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:12.926 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.926 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:12.926 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:12.926 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:12.926 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:19:12.926 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:19:12.926 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:12.926 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:12.926 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:12.926 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:12.926 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.926 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:12.926 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.926 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:19:12.926 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:19:12.926 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:13.184 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:13.184 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:13.184 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:13.184 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:13.184 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.184 19:01:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:13.184 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.184 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:19:13.184 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:19:13.184 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:13.443 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:13.443 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:13.443 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:13.443 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:13.443 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.443 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:13.443 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.443 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:19:13.443 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:19:13.443 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:19:13.443 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:19:13.443 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:19:13.443 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3189821 00:19:13.443 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3189821 ']' 00:19:13.443 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3189821 00:19:13.443 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:19:13.443 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:13.443 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3189821 00:19:13.443 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:13.443 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:13.443 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3189821' 00:19:13.443 killing process with pid 3189821 00:19:13.443 19:01:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3189821 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3189821
00:19:13.443 Received shutdown signal, test time was about 0.936211 seconds
00:19:13.443
00:19:13.443 Latency(us)
00:19:13.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:13.443 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:13.443 Verification LBA range: start 0x0 length 0x400
00:19:13.443 Nvme1n1 : 0.92 207.89 12.99 0.00 0.00 304156.57 21262.79 278066.82
00:19:13.443 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:13.443 Verification LBA range: start 0x0 length 0x400
00:19:13.443 Nvme2n1 : 0.91 210.64 13.16 0.00 0.00 293768.91 24758.04 260978.92
00:19:13.443 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:13.443 Verification LBA range: start 0x0 length 0x400
00:19:13.443 Nvme3n1 : 0.89 215.33 13.46 0.00 0.00 281144.38 18932.62 271853.04
00:19:13.443 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:13.443 Verification LBA range: start 0x0 length 0x400
00:19:13.443 Nvme4n1 : 0.93 274.55 17.16 0.00 0.00 215213.99 9272.13 274959.93
00:19:13.443 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:13.443 Verification LBA range: start 0x0 length 0x400
00:19:13.443 Nvme5n1 : 0.94 205.26 12.83 0.00 0.00 283836.37 23981.32 318456.41
00:19:13.443 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:13.443 Verification LBA range: start 0x0 length 0x400
00:19:13.443 Nvme6n1 : 0.93 206.87 12.93 0.00 0.00 275329.58 21165.70 293601.28
00:19:13.443 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:13.443 Verification LBA range: start 0x0 length 0x400
00:19:13.443 Nvme7n1 : 0.92 208.70 13.04 0.00 0.00 266700.67 35729.26 257872.02
00:19:13.443 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:13.443 Verification LBA range: start 0x0 length 0x400
00:19:13.443 Nvme8n1 : 0.90 213.60 13.35 0.00 0.00 253651.12 22816.24 287387.50
00:19:13.443 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:13.443 Verification LBA range: start 0x0 length 0x400
00:19:13.443 Nvme9n1 : 0.91 212.09 13.26 0.00 0.00 250070.91 35340.89 259425.47
00:19:13.443 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:13.443 Verification LBA range: start 0x0 length 0x400
00:19:13.443 Nvme10n1 : 0.91 211.50 13.22 0.00 0.00 244910.96 41166.32 253211.69
00:19:13.443 ===================================================================================================================
00:19:13.443 Total : 2166.43 135.40 0.00 0.00 265211.75 9272.13 318456.41
00:19:13.700 19:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:19:15.071 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3189640 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf
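The read counter climbing 3 -> 67 -> 131 in the trace before the kill is shutdown.sh's waitforio loop deciding when bdevperf has real I/O in flight, so the target is only shut down mid-workload. Reconstructed roughly from the traced lines (shutdown.sh@57-@69; rpc_cmd is the autotest helper seen throughout this log):

waitforio() {
    # Poll a bdev's read-op counter over the rpc socket; succeed once it
    # crosses 100 ops, give up after 10 tries spaced 0.25s apart.
    local sock=$1 bdev=$2 ret=1 i count
    for ((i = 10; i != 0; i--)); do
        count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [[ $count -ge 100 ]]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

waitforio /var/tmp/bdevperf.sock Nvme1n1    # as invoked by shutdown.sh@107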
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:15.071 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:15.071 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:15.071 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:15.071 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:19:15.072 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:15.072 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:19:15.072 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:15.072 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:15.072 rmmod nvme_tcp 00:19:15.072 rmmod nvme_fabrics 00:19:15.072 rmmod nvme_keyring 00:19:15.072 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:15.072 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:19:15.072 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:19:15.072 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3189640 ']' 00:19:15.072 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3189640 00:19:15.072 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3189640 ']' 00:19:15.072 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3189640 00:19:15.072 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:19:15.072 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:15.072 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3189640 00:19:15.072 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:15.072 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:15.072 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3189640' 00:19:15.072 killing process with pid 3189640 00:19:15.072 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3189640 00:19:15.072 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3189640 00:19:15.638 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:15.638 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
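killprocess, used twice above (bdevperf pid 3189821, then target pid 3189640), guards against stale or recycled pids before signalling; the uname test in the trace merely selects the Linux flavor of ps. The checks visible in the traced @950-@974 lines amount to roughly this sketch:

killprocess() {
    local pid=$1 name
    [[ -n $pid ]] || return 1                  # @950: a pid is required
    kill -0 "$pid" || return                   # @954: is it still alive?
    name=$(ps --no-headers -o comm= "$pid")    # @956: who owns the pid now?
    [[ $name != sudo ]] || return 1            # @960: never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"                                # @969
    wait "$pid"                                # @974: reap it if it is our child
}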
00:19:15.638 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:15.638 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:15.638 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:15.638 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.638 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:15.638 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.543 19:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:17.543 00:19:17.543 real 0m7.888s 00:19:17.543 user 0m23.834s 00:19:17.543 sys 0m1.504s 00:19:17.543 19:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:17.543 19:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:17.543 ************************************ 00:19:17.543 END TEST nvmf_shutdown_tc2 00:19:17.543 ************************************ 00:19:17.543 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:19:17.543 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:17.543 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:17.543 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:17.543 ************************************ 00:19:17.543 START TEST nvmf_shutdown_tc3 00:19:17.543 ************************************ 00:19:17.543 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:19:17.543 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:19:17.543 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:17.543 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:17.543 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:17.543 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:17.543 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:17.543 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:17.543 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.543 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:17.543 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:19:17.543 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:17.543 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:17.543 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:17.543 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:17.543 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:17.543 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:17.544 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:17.544 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:17.544 19:01:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:17.544 Found net devices under 0000:09:00.0: cvl_0_0 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:17.544 Found net devices under 0000:09:00.1: cvl_0_1 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:17.544 19:01:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:17.544 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:17.809 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:17.809 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:17.809 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:17.809 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:17.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:17.809 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:19:17.809 00:19:17.809 --- 10.0.0.2 ping statistics --- 00:19:17.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.809 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:19:17.809 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:17.809 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:17.809 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:19:17.809 00:19:17.809 --- 10.0.0.1 ping statistics --- 00:19:17.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.809 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:19:17.809 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:17.809 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:19:17.809 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:17.809 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:17.809 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:17.809 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:17.809 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:17.809 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:17.809 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:17.809 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:17.809 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:17.809 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:17.809 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:17.809 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3190739 00:19:17.809 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:17.809 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3190739 00:19:17.809 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3190739 ']' 00:19:17.809 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.809 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:17.809 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
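nvmf_tcp_init, traced above, builds the test topology out of the two e810 ports: cvl_0_0 moves into a private namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), with an iptables accept for the NVMe/TCP port and a ping in each direction to prove reachability. Condensed from the trace (run as root; $NS abbreviates the NVMF_TARGET_NAMESPACE variable):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"              # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP default port
ping -c 1 10.0.0.2                           # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1       # target -> initiator

Running nvmf_tgt inside the namespace is what gives a single host a real NIC-to-NIC TCP path; the repeated "ip netns exec cvl_0_0_ns_spdk" prefix on the nvmf_tgt command above is apparently NVMF_APP accumulating the prefix across the test cases in this run, which is harmless since re-entering the same namespace is a no-op.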
00:19:17.809 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:17.809 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:17.809 [2024-07-24 19:01:55.279677] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:19:17.809 [2024-07-24 19:01:55.279760] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:17.809 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.809 [2024-07-24 19:01:55.343845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:18.067 [2024-07-24 19:01:55.456820] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:18.067 [2024-07-24 19:01:55.456877] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:18.067 [2024-07-24 19:01:55.456906] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:18.067 [2024-07-24 19:01:55.456917] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:18.067 [2024-07-24 19:01:55.456928] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:18.067 [2024-07-24 19:01:55.457010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:18.067 [2024-07-24 19:01:55.457074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:18.067 [2024-07-24 19:01:55.457141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:18.067 [2024-07-24 19:01:55.457145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:18.067 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:18.067 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:19:18.067 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:18.067 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:18.067 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:18.067 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.067 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:18.067 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.067 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:18.067 [2024-07-24 19:01:55.617738] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.068 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.068 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:19:18.068 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:18.068 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:18.068 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:18.068 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:18.068 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:18.068 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:18.068 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:18.068 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:18.068 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:18.068 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:18.068 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:18.068 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:18.068 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:18.068 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:18.068 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:18.068 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:18.068 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:18.068 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:18.068 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:18.068 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:18.068 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:18.068 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:18.068 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:18.068 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:18.068 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:18.068 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.068 19:01:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:19:18.326 Malloc1 00:19:18.326 [2024-07-24 19:01:55.704148] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:18.326 Malloc2 00:19:18.326 Malloc3 00:19:18.326 Malloc4 00:19:18.326 Malloc5 00:19:18.326 Malloc6 00:19:18.585 Malloc7 00:19:18.585 Malloc8 00:19:18.585 Malloc9 00:19:18.585 Malloc10 00:19:18.585 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.585 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:18.585 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:18.585 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:18.585 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3190918 00:19:18.585 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3190918 /var/tmp/bdevperf.sock 00:19:18.585 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3190918 ']' 00:19:18.585 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:18.585 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:18.585 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:18.585 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:18.585 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:19:18.585 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:18.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:18.585 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config
00:19:18.585 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable
00:19:18.585 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:19:18.585 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:19:18.585 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
00:19:18.585 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat
[the for-subsystem / config+= / cat trace above repeats verbatim nine more times, once for each of subsystems 1-10; identical iterations elided]
00:19:18.845 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq .
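[editor's note] The trace above is gen_nvmf_target_json assembling the --json config for bdevperf; the fully expanded result appears in the printf trace just below. A condensed, runnable sketch of the same pattern, under these assumptions: TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT come from the test environment (tcp / 10.0.0.2 / 4420 in this run), and the outer "subsystems" wrapper fed to jq is inferred, since only the comma-joined stanzas are visible in the trace:

gen_nvmf_target_json() {
	local subsystem config=()
	for subsystem in "${@:-1}"; do
		# One bdev_nvme_attach_controller stanza per subsystem,
		# filled in from the test environment.
		config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
		)")
	done
	# Join the stanzas with commas inside an assumed bdev-subsystem
	# wrapper, then validate and pretty-print with jq.
	jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [ $(IFS=','; printf '%s\n' "${config[*]}") ]
    }
  ]
}
JSON
}

Called as gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10, this yields one attach-controller entry per subsystem, matching the Nvme1..Nvme10 config printed below.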
00:19:18.845 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:19:18.845 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:18.845 "params": { 00:19:18.845 "name": "Nvme1", 00:19:18.845 "trtype": "tcp", 00:19:18.845 "traddr": "10.0.0.2", 00:19:18.845 "adrfam": "ipv4", 00:19:18.845 "trsvcid": "4420", 00:19:18.845 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:18.845 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:18.845 "hdgst": false, 00:19:18.845 "ddgst": false 00:19:18.845 }, 00:19:18.845 "method": "bdev_nvme_attach_controller" 00:19:18.845 },{ 00:19:18.845 "params": { 00:19:18.845 "name": "Nvme2", 00:19:18.845 "trtype": "tcp", 00:19:18.845 "traddr": "10.0.0.2", 00:19:18.845 "adrfam": "ipv4", 00:19:18.845 "trsvcid": "4420", 00:19:18.845 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:18.845 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:18.845 "hdgst": false, 00:19:18.845 "ddgst": false 00:19:18.845 }, 00:19:18.845 "method": "bdev_nvme_attach_controller" 00:19:18.845 },{ 00:19:18.845 "params": { 00:19:18.845 "name": "Nvme3", 00:19:18.845 "trtype": "tcp", 00:19:18.845 "traddr": "10.0.0.2", 00:19:18.845 "adrfam": "ipv4", 00:19:18.845 "trsvcid": "4420", 00:19:18.845 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:18.845 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:18.845 "hdgst": false, 00:19:18.845 "ddgst": false 00:19:18.845 }, 00:19:18.845 "method": "bdev_nvme_attach_controller" 00:19:18.845 },{ 00:19:18.845 "params": { 00:19:18.845 "name": "Nvme4", 00:19:18.845 "trtype": "tcp", 00:19:18.845 "traddr": "10.0.0.2", 00:19:18.845 "adrfam": "ipv4", 00:19:18.845 "trsvcid": "4420", 00:19:18.845 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:18.845 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:18.845 "hdgst": false, 00:19:18.845 "ddgst": false 00:19:18.845 }, 00:19:18.845 "method": "bdev_nvme_attach_controller" 00:19:18.845 },{ 00:19:18.845 "params": { 00:19:18.845 "name": "Nvme5", 00:19:18.845 "trtype": "tcp", 00:19:18.845 "traddr": "10.0.0.2", 00:19:18.845 "adrfam": "ipv4", 00:19:18.845 "trsvcid": "4420", 00:19:18.845 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:18.845 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:18.845 "hdgst": false, 00:19:18.845 "ddgst": false 00:19:18.846 }, 00:19:18.846 "method": "bdev_nvme_attach_controller" 00:19:18.846 },{ 00:19:18.846 "params": { 00:19:18.846 "name": "Nvme6", 00:19:18.846 "trtype": "tcp", 00:19:18.846 "traddr": "10.0.0.2", 00:19:18.846 "adrfam": "ipv4", 00:19:18.846 "trsvcid": "4420", 00:19:18.846 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:18.846 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:18.846 "hdgst": false, 00:19:18.846 "ddgst": false 00:19:18.846 }, 00:19:18.846 "method": "bdev_nvme_attach_controller" 00:19:18.846 },{ 00:19:18.846 "params": { 00:19:18.846 "name": "Nvme7", 00:19:18.846 "trtype": "tcp", 00:19:18.846 "traddr": "10.0.0.2", 00:19:18.846 "adrfam": "ipv4", 00:19:18.846 "trsvcid": "4420", 00:19:18.846 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:18.846 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:18.846 "hdgst": false, 00:19:18.846 "ddgst": false 00:19:18.846 }, 00:19:18.846 "method": "bdev_nvme_attach_controller" 00:19:18.846 },{ 00:19:18.846 "params": { 00:19:18.846 "name": "Nvme8", 00:19:18.846 "trtype": "tcp", 00:19:18.846 "traddr": "10.0.0.2", 00:19:18.846 "adrfam": "ipv4", 00:19:18.846 "trsvcid": "4420", 00:19:18.846 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:18.846 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:19:18.846 "hdgst": false, 00:19:18.846 "ddgst": false 00:19:18.846 }, 00:19:18.846 "method": "bdev_nvme_attach_controller" 00:19:18.846 },{ 00:19:18.846 "params": { 00:19:18.846 "name": "Nvme9", 00:19:18.846 "trtype": "tcp", 00:19:18.846 "traddr": "10.0.0.2", 00:19:18.846 "adrfam": "ipv4", 00:19:18.846 "trsvcid": "4420", 00:19:18.846 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:18.846 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:18.846 "hdgst": false, 00:19:18.846 "ddgst": false 00:19:18.846 }, 00:19:18.846 "method": "bdev_nvme_attach_controller" 00:19:18.846 },{ 00:19:18.846 "params": { 00:19:18.846 "name": "Nvme10", 00:19:18.846 "trtype": "tcp", 00:19:18.846 "traddr": "10.0.0.2", 00:19:18.846 "adrfam": "ipv4", 00:19:18.846 "trsvcid": "4420", 00:19:18.846 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:18.846 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:18.846 "hdgst": false, 00:19:18.846 "ddgst": false 00:19:18.846 }, 00:19:18.846 "method": "bdev_nvme_attach_controller" 00:19:18.846 }' 00:19:18.846 [2024-07-24 19:01:56.224612] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:19:18.846 [2024-07-24 19:01:56.224707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3190918 ] 00:19:18.846 EAL: No free 2048 kB hugepages reported on node 1 00:19:18.846 [2024-07-24 19:01:56.287982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.846 [2024-07-24 19:01:56.398517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.746 Running I/O for 10 seconds... 00:19:20.746 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:20.746 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:19:20.746 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:20.746 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.746 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:20.746 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.746 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:20.746 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:20.746 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:20.746 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:20.746 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:19:20.746 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:19:20.746 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@59 -- # (( i = 10 )) 00:19:20.746 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:20.746 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:20.746 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:20.746 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.746 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:20.746 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.746 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:19:20.746 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:19:20.746 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:21.006 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:21.006 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:21.006 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:21.006 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:21.006 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.006 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:21.006 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.006 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:19:21.006 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:19:21.006 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:21.265 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:21.265 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:21.265 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:21.265 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:21.265 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.265 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:21.265 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.265 19:01:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=73 00:19:21.265 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 73 -ge 100 ']' 00:19:21.265 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:21.533 19:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:21.533 19:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:21.533 19:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:21.533 19:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:21.533 19:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.533 19:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:21.533 19:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.533 19:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:19:21.533 19:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:19:21.533 19:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:19:21.533 19:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:19:21.533 19:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:19:21.533 19:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3190739 00:19:21.533 19:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 3190739 ']' 00:19:21.533 19:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 3190739 00:19:21.533 19:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:19:21.533 19:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:21.533 19:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3190739 00:19:21.533 19:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:21.533 19:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:21.533 19:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3190739' 00:19:21.533 killing process with pid 3190739 00:19:21.533 19:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 3190739 00:19:21.533 19:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 3190739 00:19:21.533 [2024-07-24 19:01:59.078069] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2564d60 is same with the state(5) to be set
[the same nvmf_tcp_qpair_set_recv_state error repeats several dozen more times for tqpair=0x2564d60 between 19:01:59.078 and 19:01:59.079; identical repeats elided]
00:19:21.533 [2024-07-24 19:01:59.081098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:19:21.534 [2024-07-24 19:01:59.081226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for cid:1, cid:2 and cid:3; identical repeats elided]
00:19:21.534 [2024-07-24 19:01:59.081330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2446830 is same with the state(5) to be set
00:19:21.534 [2024-07-24 19:01:59.081580] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391550 is same with the state(5) to be set
[the same nvmf_tcp_qpair_set_recv_state error then repeats in long bursts for tqpair=0x2565220 (19:01:59.082-083), 0x25656e0 (19:01:59.086-087), 0x2565bc0 (19:01:59.088-089) and 0x2566080 (19:01:59.090); identical repeats elided, and the log is truncated mid-burst]
with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.090925] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.090937] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.090950] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.090962] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.090974] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.090986] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.091001] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.091014] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.091027] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.091040] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.091053] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.091066] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.091078] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.091091] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.091133] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.091155] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.091168] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.091181] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.091195] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.091208] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.091221] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.091234] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.091247] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.091260] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.091273] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.091286] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.091299] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.091312] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.091325] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.091338] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.091350] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.091363] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.091375] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.091387] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.091400] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.091422] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.091449] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.091461] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.091473] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.091485] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566080 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.092574] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.092605] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.092620] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the 
state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.092633] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.092646] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.092675] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.092689] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.092702] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.092714] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.537 [2024-07-24 19:01:59.092726] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.092739] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.092751] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.092763] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.092775] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.092787] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.092799] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.092812] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.092824] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.092836] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.092848] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.092860] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.092873] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.092885] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.092899] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.092912] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.092924] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.092937] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.092949] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.092964] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.092979] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.092992] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.093005] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.093017] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.093030] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.093042] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.093054] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.093067] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.093094] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.093115] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.093129] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.093143] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.093158] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.093171] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.093184] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.093196] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.093210] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 
19:01:59.093224] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.093237] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.093250] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.093263] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.093276] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.093288] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.093301] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.093314] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.093327] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.093343] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.093365] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.093378] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.093391] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.093403] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.093416] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.093428] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.093440] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566560 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.094784] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.094811] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.094825] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.094838] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.094851] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same 
with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.094864] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.094878] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.094890] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.538 [2024-07-24 19:01:59.094903] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.094916] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.094928] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.094941] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.094954] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.094967] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.094980] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.094993] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095006] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095018] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095031] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095050] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095063] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095076] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095111] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095126] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095139] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095174] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095187] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095200] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095213] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095226] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095238] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095251] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095264] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095277] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095290] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095303] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095315] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095328] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095340] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095365] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095377] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095389] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095402] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095414] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095427] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095439] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095464] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095477] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the 
state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095490] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095518] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095531] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095543] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095556] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095568] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095580] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095592] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095604] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095616] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095629] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095641] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095652] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095664] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.095676] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2566a20 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.097339] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.097374] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.097388] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.097402] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.097415] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.097428] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.097441] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.097454] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.097467] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.097480] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.097492] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.097510] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.097523] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.097536] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.097548] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.097561] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.097574] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.097586] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.097598] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.097611] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.097624] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.097638] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.539 [2024-07-24 19:01:59.097650] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.097663] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.097676] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.097688] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.097701] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.097729] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 
19:01:59.097741] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.097754] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.097767] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.097779] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.097791] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.097803] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.097816] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.097828] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.097840] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.097852] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.097868] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.097882] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.097895] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.097908] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.097921] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.097933] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.097945] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.097958] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.097970] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.097983] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.097995] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.098007] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same 
with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.098019] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.098032] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.098044] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.098057] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.098069] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.098081] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.098118] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.098132] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.098154] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.098167] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.098179] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.098192] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.098205] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.109226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.540 [2024-07-24 19:01:59.109312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.540 [2024-07-24 19:01:59.109354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.540 [2024-07-24 19:01:59.109370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.540 [2024-07-24 19:01:59.109385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.540 [2024-07-24 19:01:59.109400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.540 [2024-07-24 19:01:59.109414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.540 [2024-07-24 19:01:59.109428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:21.540 [2024-07-24 19:01:59.109442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24727b0 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.109508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.540 [2024-07-24 19:01:59.109529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.540 [2024-07-24 19:01:59.109545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.540 [2024-07-24 19:01:59.109559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.540 [2024-07-24 19:01:59.109574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.540 [2024-07-24 19:01:59.109588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.540 [2024-07-24 19:01:59.109603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.540 [2024-07-24 19:01:59.109617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.540 [2024-07-24 19:01:59.109630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2471090 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.109680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.540 [2024-07-24 19:01:59.109701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.540 [2024-07-24 19:01:59.109716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.540 [2024-07-24 19:01:59.109732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.540 [2024-07-24 19:01:59.109747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.540 [2024-07-24 19:01:59.109760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.540 [2024-07-24 19:01:59.109775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.540 [2024-07-24 19:01:59.109789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.540 [2024-07-24 19:01:59.109802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2527670 is same with the state(5) to be set 00:19:21.540 [2024-07-24 19:01:59.109855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.540 [2024-07-24 19:01:59.109876] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.540 [2024-07-24 19:01:59.109892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.540 [2024-07-24 19:01:59.109907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.540 [2024-07-24 19:01:59.109922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.540 [2024-07-24 19:01:59.109935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.540 [2024-07-24 19:01:59.109949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.540 [2024-07-24 19:01:59.109964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.540 [2024-07-24 19:01:59.109978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520410 is same with the state(5) to be set 00:19:21.541 [2024-07-24 19:01:59.110027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.541 [2024-07-24 19:01:59.110048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.541 [2024-07-24 19:01:59.110064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.541 [2024-07-24 19:01:59.110078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.541 [2024-07-24 19:01:59.110092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.541 [2024-07-24 19:01:59.110114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.541 [2024-07-24 19:01:59.110130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.541 [2024-07-24 19:01:59.110154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.541 [2024-07-24 19:01:59.110167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2606fd0 is same with the state(5) to be set 00:19:21.541 [2024-07-24 19:01:59.110216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.541 [2024-07-24 19:01:59.110236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.541 [2024-07-24 19:01:59.110252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.541 [2024-07-24 19:01:59.110265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:21.541 [2024-07-24 19:01:59.110281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.541 [2024-07-24 19:01:59.110295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.541 [2024-07-24 19:01:59.110309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.541 [2024-07-24 19:01:59.110323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.541 [2024-07-24 19:01:59.110340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f51d0 is same with the state(5) to be set 00:19:21.541 [2024-07-24 19:01:59.110400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.541 [2024-07-24 19:01:59.110420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.541 [2024-07-24 19:01:59.110436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.541 [2024-07-24 19:01:59.110450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.541 [2024-07-24 19:01:59.110465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.541 [2024-07-24 19:01:59.110479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.541 [2024-07-24 19:01:59.110493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.541 [2024-07-24 19:01:59.110507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.541 [2024-07-24 19:01:59.110520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f65b0 is same with the state(5) to be set 00:19:21.541 [2024-07-24 19:01:59.110568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.541 [2024-07-24 19:01:59.110589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.541 [2024-07-24 19:01:59.110605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.541 [2024-07-24 19:01:59.110619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.541 [2024-07-24 19:01:59.110633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.541 [2024-07-24 19:01:59.110647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.541 [2024-07-24 19:01:59.110662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.541 [2024-07-24 19:01:59.110675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.541 [2024-07-24 19:01:59.110688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f48610 is same with the state(5) to be set 00:19:21.541 [2024-07-24 19:01:59.110723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2446830 (9): Bad file descriptor 00:19:21.541 [2024-07-24 19:01:59.110775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.541 [2024-07-24 19:01:59.110795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.541 [2024-07-24 19:01:59.110813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.541 [2024-07-24 19:01:59.110828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.541 [2024-07-24 19:01:59.110842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.541 [2024-07-24 19:01:59.110861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.541 [2024-07-24 19:01:59.110877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.541 [2024-07-24 19:01:59.110891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.541 [2024-07-24 19:01:59.110905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x246ab50 is same with the state(5) to be set 00:19:21.541 [2024-07-24 19:01:59.112020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.541 [2024-07-24 19:01:59.112049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.541 [2024-07-24 19:01:59.112077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.541 [2024-07-24 19:01:59.112094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.541 [2024-07-24 19:01:59.112120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.541 [2024-07-24 19:01:59.112137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.541 [2024-07-24 19:01:59.112160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.541 [2024-07-24 19:01:59.112175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
00:19:21.541 [2024-07-24 19:01:59.112020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:21.541 [2024-07-24 19:01:59.112049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE / ABORTED - SQ DELETION pair repeats for cid:11 through cid:63, lba rising by 128 per command from 25984 to 32640 (19:01:59.112077-113802) ...]
00:19:21.543 [2024-07-24 19:01:59.113818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:21.543 [2024-07-24 19:01:59.113834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION pair repeats for cid:1 through cid:9, lba 24704 to 25728 (19:01:59.113852-114140) ...]
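Each WRITE/READ entry above is one command still outstanding on the qpair at the moment the TCP connection dropped: the LBAs climb in steps of 128 blocks, matching the len:128 requests of a sequential workload, and all of them complete as aborted rather than failed. The "CQ transport error -6" reported next is -ENXIO returned by the completion poller; a hedged sketch of a poller that surfaces it (poll_io_qpair is an illustrative name, not the test's code):

    #include <errno.h>
    #include "spdk/nvme.h"

    /* Illustrative poller: spdk_nvme_qpair_process_completions() returns
     * the number of completions reaped, or a negative errno once the
     * transport fails -- the "-6 (No such device or address)" below is
     * this -ENXIO return. */
    static int
    poll_io_qpair(struct spdk_nvme_qpair *qpair)
    {
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* drain all */);

        if (rc == -ENXIO) {
            /* The connection is gone: the upper layer disconnects the
             * qpair and resets the controller, which produces the
             * "was disconnected and freed. reset controller." lines below. */
            return -1;
        }
        return rc < 0 ? -1 : 0;
    }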
00:19:21.543 [2024-07-24 19:01:59.114194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:19:21.543 [2024-07-24 19:01:59.114279] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2f1bb30 was disconnected and freed. reset controller.
00:19:21.543 [2024-07-24 19:01:59.114425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:21.543 [2024-07-24 19:01:59.114456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE / ABORTED - SQ DELETION pair repeats for cid:1 through cid:63, lba rising by 128 per command from 24704 to 32640 (19:01:59.114493-116620) ...]
00:19:21.545 [2024-07-24 19:01:59.118060] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2505830 was disconnected and freed. reset controller.
00:19:21.545 [2024-07-24 19:01:59.121294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24727b0 (9): Bad file descriptor
00:19:21.545 [2024-07-24 19:01:59.121343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2471090 (9): Bad file descriptor
00:19:21.545 [2024-07-24 19:01:59.121378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2527670 (9): Bad file descriptor
00:19:21.545 [2024-07-24 19:01:59.121410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2520410 (9): Bad file descriptor
00:19:21.545 [2024-07-24 19:01:59.121440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2606fd0 (9): Bad file descriptor
00:19:21.545 [2024-07-24 19:01:59.121466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25f51d0 (9): Bad file descriptor
00:19:21.545 [2024-07-24 19:01:59.121496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25f65b0 (9): Bad file descriptor
00:19:21.545 [2024-07-24 19:01:59.121523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f48610 (9): Bad file descriptor
00:19:21.545 [2024-07-24 19:01:59.121560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x246ab50 (9): Bad file descriptor
00:19:21.545 [2024-07-24 19:01:59.121637] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:21.545 [2024-07-24 19:01:59.122386] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
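The failed flushes report errno 9 (EBADF): the sockets behind those tqpairs were already closed. After that the host resets each controller, here nqn.2016-06.io.spdk:cnode8. The bdev_nvme module performs this reset asynchronously; the simplest public-API equivalent is the synchronous reset below (a sketch under that assumption, with recover_controller an illustrative name, not the module's code):

    #include "spdk/nvme.h"

    /* Sketch: a full controller reset disconnects the admin and I/O
     * qpairs (aborting queued commands with SQ DELETION, as dumped
     * above), re-enables the controller, and reconnects the qpairs. */
    static int
    recover_controller(struct spdk_nvme_ctrlr *ctrlr)
    {
        int rc = spdk_nvme_ctrlr_reset(ctrlr);
        if (rc != 0) {
            /* reset failed; the controller stays in a failed state */
        }
        return rc;
    }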
00:19:21.545 [2024-07-24 19:01:59.122510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:21.545 [2024-07-24 19:01:59.122541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION pair repeats for cid:12 through cid:63, lba rising by 128 per command from 17920 to 24448 (19:01:59.122578-124257) ...]
00:19:21.546 [2024-07-24 19:01:59.124274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:21.547 [2024-07-24 19:01:59.124289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE / ABORTED - SQ DELETION pair repeats for cid:1 through cid:6, lba 24704 to 25344 (19:01:59.124305-124476) ...]
00:19:21.547 [2024-07-24 19:01:59.124493] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:21.547 [2024-07-24 19:01:59.124507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:21.547 [2024-07-24 19:01:59.124523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:21.547 [2024-07-24 19:01:59.124537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:21.547 [2024-07-24 19:01:59.124553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:21.547 [2024-07-24 19:01:59.124568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:21.547 [2024-07-24 19:01:59.124584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:21.547 [2024-07-24 19:01:59.124599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:21.547 [2024-07-24 19:01:59.124617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x250c2a0 is same with the state(5) to be set
00:19:21.547 [2024-07-24 19:01:59.125933] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:21.813 [2024-07-24 19:01:59.127237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:19:21.813 [2024-07-24 19:01:59.127272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:21.813 [2024-07-24 19:01:59.127465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:21.813 [2024-07-24 19:01:59.127496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2606fd0 with addr=10.0.0.2, port=4420
00:19:21.813 [2024-07-24 19:01:59.127514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2606fd0 is same with the state(5) to be set
00:19:21.813 [2024-07-24 19:01:59.127631] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:21.813 [2024-07-24 19:01:59.127704] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:21.813 [2024-07-24 19:01:59.127771] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:21.813 [2024-07-24 19:01:59.127841] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:21.813 [2024-07-24 19:01:59.127915] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:21.813 [2024-07-24 19:01:59.128242] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:21.813 [2024-07-24 19:01:59.128419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:21.813 [2024-07-24 19:01:59.128455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25f65b0 with addr=10.0.0.2, port=4420
00:19:21.813 [2024-07-24 19:01:59.128483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f65b0 is same with the state(5) to be set
00:19:21.813 [2024-07-24 19:01:59.128619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:21.813 [2024-07-24 19:01:59.128645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2446830 with addr=10.0.0.2, port=4420
00:19:21.813 [2024-07-24 19:01:59.128662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2446830 is same with the state(5) to be set
00:19:21.813 [2024-07-24 19:01:59.128685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2606fd0 (9): Bad file descriptor
00:19:21.813 [2024-07-24 19:01:59.129078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25f65b0 (9): Bad file descriptor
00:19:21.813 [2024-07-24 19:01:59.129187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2446830 (9): Bad file descriptor
00:19:21.813 [2024-07-24 19:01:59.129211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:19:21.813 [2024-07-24 19:01:59.129226] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:19:21.814 [2024-07-24 19:01:59.129243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:19:21.814 [2024-07-24 19:01:59.129315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:21.814 [2024-07-24 19:01:59.129337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:19:21.814 [2024-07-24 19:01:59.129351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:19:21.814 [2024-07-24 19:01:59.129364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:19:21.814 [2024-07-24 19:01:59.129384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:21.814 [2024-07-24 19:01:59.129398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:21.814 [2024-07-24 19:01:59.129412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:21.814 [2024-07-24 19:01:59.129483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:21.814 [2024-07-24 19:01:59.129503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
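Every queued READ/WRITE on qpair 1 above completes with the same status, ABORTED - SQ DELETION (00/08): the submission queue is deleted while the controller is reset, so outstanding commands are aborted rather than executed. A minimal sketch of decoding the (SCT/SC) pair that spdk_nvme_print_completion prints, assuming the generic status codes from the NVMe base specification (SCT 0x0, SC 0x08 = Command Aborted due to SQ Deletion); decode_status is a hypothetical helper, not an SPDK API:

    #include <stdio.h>

    /* Hypothetical helper (not an SPDK API): name the "(SCT/SC)" pair
     * printed in the completion lines above. Per the NVMe base spec,
     * SCT 0x0 is the generic command status set and SC 0x08 in that
     * set is "Command Aborted due to SQ Deletion". */
    static const char *decode_status(unsigned int sct, unsigned int sc)
    {
        if (sct == 0x0 && sc == 0x08) {
            return "ABORTED - SQ DELETION";
        }
        return "other status";
    }

    int main(void)
    {
        /* The (00/08) pair from the log lines above. */
        printf("%s\n", decode_status(0x0, 0x08));
        return 0;
    }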
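The "connect() failed, errno = 111" lines are ECONNREFUSED on Linux: while the target is being reset nothing accepts on 10.0.0.2:4420, so each reconnect attempt by nvme_tcp_qpair_connect_sock is refused and the reconnect poller eventually reports "controller reinitialization failed". A standalone POSIX probe that reproduces the same errno against a port with no listener; the address and port are taken from the log above, not from SPDK code:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* Address and port as reported by nvme_tcp_qpair_connect_sock. */
        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port = htons(4420),
        };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on the port, Linux sets errno to 111
             * (ECONNREFUSED), matching posix_sock_create above. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }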
00:19:21.814 [2024-07-24 19:01:59.131454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.131481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 19:01:59.131508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.131524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 19:01:59.131541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.131557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 19:01:59.131574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.131588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 19:01:59.131604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.131619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 19:01:59.131636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.131650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 19:01:59.131666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.131680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 19:01:59.131697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.131712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 19:01:59.131729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.131743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 19:01:59.131759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.131774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 
19:01:59.131790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.131804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 19:01:59.131821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.131836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 19:01:59.131857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.131872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 19:01:59.131889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.131904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 19:01:59.131920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.131935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 19:01:59.131951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.131966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 19:01:59.131983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.131998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 19:01:59.132014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.132029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 19:01:59.132045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.132060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 19:01:59.132077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.132091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 19:01:59.132114] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.132131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 19:01:59.132148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.132162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 19:01:59.132179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.132193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 19:01:59.132210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.132224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 19:01:59.132240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.132259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 19:01:59.132275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.132290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 19:01:59.132306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.132320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 19:01:59.132337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.132351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 19:01:59.132368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.132382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 19:01:59.132399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.132413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 19:01:59.132429] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.132444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 19:01:59.132460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.132475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 19:01:59.132491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.132506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 19:01:59.132523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.132537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 19:01:59.132555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.814 [2024-07-24 19:01:59.132570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.814 [2024-07-24 19:01:59.132587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.132601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.132618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.132632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.132653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.132668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.132685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.132700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.132716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.132731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.132748] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.132762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.132779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.132793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.132810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.132825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.132841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.132856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.132873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.132887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.132904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.132918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.132935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.132956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.132973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.132987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.133004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.133019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.133035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.133053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.133070] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.133085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.133114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.133131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.133148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.133163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.133179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.133194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.133210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.133224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.133240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.133254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.133270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.133284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.133301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.133315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.133332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.133346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.133362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.133376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.133392] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.133407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.133423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.133437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.133457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.133478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.145388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.145448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.145466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a9de0 is same with the state(5) to be set 00:19:21.815 [2024-07-24 19:01:59.146834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.146859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.146885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.146903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.146922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.146937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.146953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.146968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.146984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.146999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.147015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.147030] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.147047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.147070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.147086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.147100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.147128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.147143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.147160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.815 [2024-07-24 19:01:59.147175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.815 [2024-07-24 19:01:59.147204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.147219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.147236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.147250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.147267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.147281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.147298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.147312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.147330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.147345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.147361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.147376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.147392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.147406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.147423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.147437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.147454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.147468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.147484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.147499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.147515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.147529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.147546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.147560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.147577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.147601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.147618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.147632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.147649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.147663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.147680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.147694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.147711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.147725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.147742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.147756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.147772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.147787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.147803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.147818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.147835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.147849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.147866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.147880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.147897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.147911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.147928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.147943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.147959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.147973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.147995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.148010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.148027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.148041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.148057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.148072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.148088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.148108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.148126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.148140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.148157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.148171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.148188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.148202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.148218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.148232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.148249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.148263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.148280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.148294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.816 [2024-07-24 19:01:59.148310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.816 [2024-07-24 19:01:59.148324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:21.816 [2024-07-24 19:01:59.148341-148891] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:46-63 nsid:1 lba:30464-32640 (lba step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 -- all 18 commands completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:21.817 [2024-07-24 19:01:59.148905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25ab2d0 is same with the state(5) to be set
00:19:21.817 [2024-07-24 19:01:59.150179-150238] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:4-5 nsid:1 lba:25088-25216 (lba step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 -- both commands completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:21.817 [2024-07-24 19:01:59.150255-150364] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:0-3 nsid:1 lba:32768-33152 (lba step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 -- all 4 commands completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:21.817 [2024-07-24 19:01:59.150380-152230] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:6-63 nsid:1 lba:25344-32640 (lba step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 -- all 58 commands completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:21.819 [2024-07-24 19:01:59.152245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243ff00 is same with the state(5) to be set
00:19:21.819 [2024-07-24 19:01:59.153492-155554] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 (lba step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 -- all 64 commands completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:21.821 [2024-07-24 19:01:59.155570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24413f0 is same with the state(5) to be set
00:19:21.821 [2024-07-24 19:01:59.156815-158830] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-62 nsid:1 lba:16384-24320 (lba step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 -- all 63 commands completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:21.822 [2024-07-24 19:01:59.158847] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.822 [2024-07-24 19:01:59.158862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.822 [2024-07-24 19:01:59.158877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24428e0 is same with the state(5) to be set 00:19:21.822 [2024-07-24 19:01:59.160140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.822 [2024-07-24 19:01:59.160164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.822 [2024-07-24 19:01:59.160187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.822 [2024-07-24 19:01:59.160204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.822 [2024-07-24 19:01:59.160221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.822 [2024-07-24 19:01:59.160241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.822 [2024-07-24 19:01:59.160259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.822 [2024-07-24 19:01:59.160275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.822 [2024-07-24 19:01:59.160293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.822 [2024-07-24 19:01:59.160307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.822 [2024-07-24 19:01:59.160323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.822 [2024-07-24 19:01:59.160339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.822 [2024-07-24 19:01:59.160356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.822 [2024-07-24 19:01:59.160370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.822 [2024-07-24 19:01:59.160387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.822 [2024-07-24 19:01:59.160402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.822 [2024-07-24 19:01:59.160419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.160433] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.160450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.160465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.160481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.160496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.160513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.160527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.160543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.160557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.160574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.160588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.160605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.160619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.160639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.160654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.160671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.160686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.160703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.160717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.160734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.160748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.160765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.160779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.160796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.160810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.160826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.160841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.160857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.160871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.160888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.160902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.160918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.160933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.160949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.160963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.160980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.160995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.161011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.161029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.161046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.161060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.161077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.161091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.161115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.161143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.161159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.161174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.161190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.161205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.161221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.161235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.161252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.161266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.161282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.161297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.161313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.161328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.161344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.161359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.161375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.161389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.161405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.161420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.161436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.161455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.161473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.161488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.161504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.823 [2024-07-24 19:01:59.161519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.823 [2024-07-24 19:01:59.161536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.161551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.161567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.161582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.161599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.168775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.168854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.168871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.168888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.168902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.168919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.168933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:21.824 [2024-07-24 19:01:59.168950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.168965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.168982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.168997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.169013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.169028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.169045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.169059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.169087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.169109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.169131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.169146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.169163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.169179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.169194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.169209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.169226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.169240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.169257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.169271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 
19:01:59.169288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.169303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.169320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.169334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.169352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.169366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.169383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.169397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.169414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.169428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.169444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2d74050 is same with the state(5) to be set 00:19:21.824 [2024-07-24 19:01:59.170786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.170810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.170842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.170859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.170877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.170892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.170908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.170923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.170939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.170955] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.170971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.170986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.171002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.171017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.171033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.171048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.171064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.171079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.171095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.171117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.171134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.171151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.171175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.171190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.171206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.171221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.171237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.171256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.171273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.171287] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.171303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.171318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.171334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.171348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.171365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.824 [2024-07-24 19:01:59.171379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.824 [2024-07-24 19:01:59.171396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.171410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.171426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.171441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.171457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.171472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.171488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.171503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.171519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.171534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.171550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.171565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.171581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.171596] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.171612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.171627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.171647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.171662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.171678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.171693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.171710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.171724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.171741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.171755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.171771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.171786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.171802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.171816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.171832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.171846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.171862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.171877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.171893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.171908] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.171924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.171938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.171955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.171969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.171986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.172000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.172017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.172035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.172052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.172067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.172084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.172099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.172122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.172138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.172154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.172168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.172185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.172199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.172216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.172230] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.172246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.172260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.172276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.172291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.172307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.172321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.172338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.172351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.172368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.172381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.172398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.172413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.172433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.172448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.172465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.172479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.172496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.172511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.172528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.172543] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.172560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.172575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.172591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.825 [2024-07-24 19:01:59.172606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.825 [2024-07-24 19:01:59.172622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.826 [2024-07-24 19:01:59.172636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.826 [2024-07-24 19:01:59.172652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.826 [2024-07-24 19:01:59.172667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.826 [2024-07-24 19:01:59.172683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.826 [2024-07-24 19:01:59.172698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.826 [2024-07-24 19:01:59.172714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.826 [2024-07-24 19:01:59.172729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.826 [2024-07-24 19:01:59.172745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.826 [2024-07-24 19:01:59.172760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.826 [2024-07-24 19:01:59.172777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.826 [2024-07-24 19:01:59.172791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.826 [2024-07-24 19:01:59.172807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.826 [2024-07-24 19:01:59.172826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.826 [2024-07-24 19:01:59.172842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504310 is same with the state(5) to be set 00:19:21.826 [2024-07-24 19:01:59.175098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode2] resetting controller 00:19:21.826 [2024-07-24 19:01:59.175139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:19:21.826 [2024-07-24 19:01:59.175159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:19:21.826 [2024-07-24 19:01:59.175177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:19:21.826 [2024-07-24 19:01:59.175304] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:21.826 [2024-07-24 19:01:59.175330] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:21.826 [2024-07-24 19:01:59.175350] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:21.826 [2024-07-24 19:01:59.175457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:19:21.826 [2024-07-24 19:01:59.175482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:19:21.826 task offset: 25856 on job bdev=Nvme8n1 fails
00:19:21.826
00:19:21.826                                                                   Latency(us)
00:19:21.826 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:21.826 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:21.826 Job: Nvme1n1 ended in about 1.22 seconds with error
00:19:21.826 Verification LBA range: start 0x0 length 0x400
00:19:21.826    Nvme1n1             :       1.22     114.36       7.15      52.65       0.00  380022.71   24272.59  338651.21
00:19:21.826 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:21.826 Job: Nvme2n1 ended in about 1.24 seconds with error
00:19:21.826 Verification LBA range: start 0x0 length 0x400
00:19:21.826    Nvme2n1             :       1.24     155.30       9.71      51.77       0.00  301855.10   49516.09  302921.96
00:19:21.826 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:21.826 Job: Nvme3n1 ended in about 1.24 seconds with error
00:19:21.826 Verification LBA range: start 0x0 length 0x400
00:19:21.826    Nvme3n1             :       1.24     154.87       9.68      51.62       0.00  298127.36   20777.34  338651.21
00:19:21.826 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:21.826 Job: Nvme4n1 ended in about 1.24 seconds with error
00:19:21.826 Verification LBA range: start 0x0 length 0x400
00:19:21.826    Nvme4n1             :       1.24     157.68       9.85      51.49       0.00  289858.45   38641.97  313796.08
00:19:21.826 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:21.826 Job: Nvme5n1 ended in about 1.25 seconds with error
00:19:21.826 Verification LBA range: start 0x0 length 0x400
00:19:21.826    Nvme5n1             :       1.25     102.70       6.42      51.35       0.00  387717.25   27379.48  372827.02
00:19:21.826 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:21.826 Job: Nvme6n1 ended in about 1.25 seconds with error
00:19:21.826 Verification LBA range: start 0x0 length 0x400
00:19:21.826    Nvme6n1             :       1.25     102.43       6.40      51.21       0.00  382736.81   22427.88  344865.00
00:19:21.826 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:21.826 Job: Nvme7n1 ended in about 1.26 seconds with error
00:19:21.826 Verification LBA range: start 0x0 length 0x400
00:19:21.826    Nvme7n1             :       1.26     152.35       9.52      50.78       0.00  285071.17   18350.08  330883.98
00:19:21.826 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:21.826 Job: Nvme8n1 ended in about 1.21 seconds with error
00:19:21.826 Verification LBA range: start 0x0 length 0x400
00:19:21.826    Nvme8n1             :       1.21     158.71       9.92      52.90       0.00  267873.94    7233.23  338651.21
00:19:21.826 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:21.826 Job: Nvme9n1 ended in about 1.26 seconds with error
00:19:21.826 Verification LBA range: start 0x0 length 0x400
00:19:21.826    Nvme9n1             :       1.26     101.30       6.33      50.65       0.00  369342.89   25049.32  385254.59
00:19:21.826 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:21.826 Job: Nvme10n1 ended in about 1.21 seconds with error
00:19:21.826 Verification LBA range: start 0x0 length 0x400
00:19:21.826    Nvme10n1            :       1.21     158.55       9.91      52.85       0.00  259232.52   11311.03  340204.66
00:19:21.826 ===================================================================================================================
00:19:21.826 Total               :               1358.23      84.89     517.28       0.00  316024.93    7233.23  385254.59
00:19:21.826 [2024-07-24 19:01:59.204462] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:21.826 [2024-07-24 19:01:59.204539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:19:21.826 [2024-07-24 19:01:59.204848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:21.826 [2024-07-24 19:01:59.204885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x246ab50 with addr=10.0.0.2, port=4420 00:19:21.826 [2024-07-24 19:01:59.204905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x246ab50 is same with the state(5) to be set 00:19:21.826 [2024-07-24 19:01:59.205042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:21.826 [2024-07-24 19:01:59.205070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24727b0 with addr=10.0.0.2, port=4420 00:19:21.826 [2024-07-24 19:01:59.205086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24727b0 is same with the state(5) to be set 00:19:21.826 [2024-07-24 19:01:59.205232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:21.826 [2024-07-24 19:01:59.205260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2471090 with addr=10.0.0.2, port=4420 00:19:21.826 [2024-07-24 19:01:59.205277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2471090 is same with the state(5) to be set 00:19:21.826 [2024-07-24 19:01:59.205402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:21.826 [2024-07-24 19:01:59.205429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2527670 with addr=10.0.0.2, port=4420 00:19:21.826 [2024-07-24 19:01:59.205445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2527670 is same with the state(5) to be set 00:19:21.826 [2024-07-24 19:01:59.207441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:19:21.826 [2024-07-24 19:01:59.207471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:21.826 [2024-07-24 19:01:59.207664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:21.826 [2024-07-24 19:01:59.207693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of
tqpair=0x2520410 with addr=10.0.0.2, port=4420 00:19:21.826 [2024-07-24 19:01:59.207711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520410 is same with the state(5) to be set 00:19:21.826 [2024-07-24 19:01:59.207840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:21.826 [2024-07-24 19:01:59.207868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f48610 with addr=10.0.0.2, port=4420 00:19:21.826 [2024-07-24 19:01:59.207886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f48610 is same with the state(5) to be set 00:19:21.826 [2024-07-24 19:01:59.208007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:21.826 [2024-07-24 19:01:59.208034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25f51d0 with addr=10.0.0.2, port=4420 00:19:21.826 [2024-07-24 19:01:59.208064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f51d0 is same with the state(5) to be set 00:19:21.826 [2024-07-24 19:01:59.208090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x246ab50 (9): Bad file descriptor 00:19:21.826 [2024-07-24 19:01:59.208122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24727b0 (9): Bad file descriptor 00:19:21.826 [2024-07-24 19:01:59.208151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2471090 (9): Bad file descriptor 00:19:21.826 [2024-07-24 19:01:59.208170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2527670 (9): Bad file descriptor 00:19:21.826 [2024-07-24 19:01:59.208237] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:21.826 [2024-07-24 19:01:59.208269] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:21.826 [2024-07-24 19:01:59.208290] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:21.826 [2024-07-24 19:01:59.208312] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:21.826 [2024-07-24 19:01:59.208331] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
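Note on the latency table above: the MiB/s column is consistent with the IOPS column times the 64 KiB (65536-byte) IO size each job uses. A quick sanity check as a shell sketch (the awk invocations are illustrative only, not part of the harness):
  # Nvme1n1: 114.36 IOPS x 65536 B per IO ~= 7.15 MiB/s, matching the table
  awk 'BEGIN { printf "%.2f MiB/s\n", 114.36 * 65536 / 1048576 }'
  # Total row: 1358.23 IOPS -> ~84.89 MiB/s
  awk 'BEGIN { printf "%.2f MiB/s\n", 1358.23 * 65536 / 1048576 }'
The nonzero Fail/s column (517.28 aggregate against 1358.23 IOPS) appears expected for this case: shutdown_tc3 kills the target mid-workload, so a large share of submitted IOs fail outright rather than time out (TO/s stays 0.00).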
00:19:21.826 [2024-07-24 19:01:59.208719] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:19:21.826 [2024-07-24 19:01:59.208886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:21.827 [2024-07-24 19:01:59.208915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2606fd0 with addr=10.0.0.2, port=4420
00:19:21.827 [2024-07-24 19:01:59.208931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2606fd0 is same with the state(5) to be set
00:19:21.827 [2024-07-24 19:01:59.209071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:21.827 [2024-07-24 19:01:59.209098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2446830 with addr=10.0.0.2, port=4420
00:19:21.827 [2024-07-24 19:01:59.209123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2446830 is same with the state(5) to be set
00:19:21.827 [2024-07-24 19:01:59.209143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2520410 (9): Bad file descriptor
00:19:21.827 [2024-07-24 19:01:59.209162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f48610 (9): Bad file descriptor
00:19:21.827 [2024-07-24 19:01:59.209181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25f51d0 (9): Bad file descriptor
00:19:21.827 [2024-07-24 19:01:59.209197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:19:21.827 [2024-07-24 19:01:59.209211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:19:21.827 [2024-07-24 19:01:59.209227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:19:21.827 [2024-07-24 19:01:59.209247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:19:21.827 [2024-07-24 19:01:59.209262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:19:21.827 [2024-07-24 19:01:59.209275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:19:21.827 [2024-07-24 19:01:59.209294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:19:21.827 [2024-07-24 19:01:59.209309] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:19:21.827 [2024-07-24 19:01:59.209323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:19:21.827 [2024-07-24 19:01:59.209339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:19:21.827 [2024-07-24 19:01:59.209358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:19:21.827 [2024-07-24 19:01:59.209372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:19:21.827 [2024-07-24 19:01:59.209474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:21.827 [2024-07-24 19:01:59.209495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:21.827 [2024-07-24 19:01:59.209508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:21.827 [2024-07-24 19:01:59.209520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:21.827 [2024-07-24 19:01:59.209634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:21.827 [2024-07-24 19:01:59.209660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25f65b0 with addr=10.0.0.2, port=4420
00:19:21.827 [2024-07-24 19:01:59.209677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f65b0 is same with the state(5) to be set
00:19:21.827 [2024-07-24 19:01:59.209696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2606fd0 (9): Bad file descriptor
00:19:21.827 [2024-07-24 19:01:59.209715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2446830 (9): Bad file descriptor
00:19:21.827 [2024-07-24 19:01:59.209732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:19:21.827 [2024-07-24 19:01:59.209745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:19:21.827 [2024-07-24 19:01:59.209759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:19:21.827 [2024-07-24 19:01:59.209777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:19:21.827 [2024-07-24 19:01:59.209791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:19:21.827 [2024-07-24 19:01:59.209805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:19:21.827 [2024-07-24 19:01:59.209821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:19:21.827 [2024-07-24 19:01:59.209835] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:19:21.827 [2024-07-24 19:01:59.209848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:19:21.827 [2024-07-24 19:01:59.209885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:21.827 [2024-07-24 19:01:59.209903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:21.827 [2024-07-24 19:01:59.209915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:21.827 [2024-07-24 19:01:59.209931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25f65b0 (9): Bad file descriptor
00:19:21.827 [2024-07-24 19:01:59.209947] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:19:21.827 [2024-07-24 19:01:59.209962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:19:21.827 [2024-07-24 19:01:59.209976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
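Note: errno = 111 in the connect() failures above is ECONNREFUSED on Linux. With the target app stopped (see the spdk_app_stop warning after the latency table), nothing is listening on 10.0.0.2:4420 any more, so every reconnect the bdev layer attempts is refused and each controller ends up in the failed state. The mapping can be confirmed with a one-liner (illustrative, not from the harness):
  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # ECONNREFUSED - Connection refused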
00:19:21.827 [2024-07-24 19:01:59.209993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:21.827 [2024-07-24 19:01:59.210007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:21.827 [2024-07-24 19:01:59.210021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:21.827 [2024-07-24 19:01:59.210069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:21.827 [2024-07-24 19:01:59.210088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:21.827 [2024-07-24 19:01:59.210109] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:19:21.827 [2024-07-24 19:01:59.210124] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:19:21.827 [2024-07-24 19:01:59.210147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:19:21.827 [2024-07-24 19:01:59.210182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:22.393 19:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:19:22.393 19:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:19:23.327 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3190918 00:19:23.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3190918) - No such process 00:19:23.327 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:19:23.327 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:19:23.327 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:23.327 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:23.327 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:23.327 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:23.327 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:23.327 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:19:23.327 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:23.327 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:19:23.327 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:23.327 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:23.327 rmmod nvme_tcp 00:19:23.327 rmmod nvme_fabrics 00:19:23.327 rmmod nvme_keyring 00:19:23.327 19:02:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:23.327 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:19:23.327 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:19:23.327 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:23.327 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:23.327 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:23.327 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:23.327 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:23.327 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:23.327 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.327 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:23.327 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.268 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:25.268 00:19:25.268 real 0m7.787s 00:19:25.268 user 0m19.275s 00:19:25.268 sys 0m1.574s 00:19:25.268 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:25.268 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:25.268 ************************************ 00:19:25.268 END TEST nvmf_shutdown_tc3 00:19:25.268 ************************************ 00:19:25.268 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:19:25.268 00:19:25.268 real 0m27.313s 00:19:25.268 user 1m15.870s 00:19:25.268 sys 0m6.371s 00:19:25.268 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:25.268 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:25.268 ************************************ 00:19:25.268 END TEST nvmf_shutdown 00:19:25.268 ************************************ 00:19:25.527 19:02:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:19:25.527 00:19:25.527 real 10m35.333s 00:19:25.527 user 25m7.686s 00:19:25.527 sys 2m38.734s 00:19:25.527 19:02:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:25.527 19:02:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:25.527 ************************************ 00:19:25.527 END TEST nvmf_target_extra 00:19:25.527 ************************************ 00:19:25.527 19:02:02 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:19:25.527 19:02:02 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 
']' 00:19:25.527 19:02:02 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:25.527 19:02:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:25.527 ************************************ 00:19:25.527 START TEST nvmf_host 00:19:25.527 ************************************ 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:19:25.527 * Looking for test storage... 00:19:25.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:25.527 19:02:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.527 ************************************ 00:19:25.527 START TEST nvmf_multicontroller 00:19:25.527 ************************************ 00:19:25.527 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:25.527 * Looking for test storage... 
00:19:25.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:25.527 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:25.527 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:19:25.527 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:25.527 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:25.527 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:25.527 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:25.527 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:25.527 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:25.527 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:25.527 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:25.527 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:25.527 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:25.527 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:25.527 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:25.527 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:25.527 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:25.527 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:25.527 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:25.527 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:25.527 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:25.527 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:25.527 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:25.528 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.528 19:02:03 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.528 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.528 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:19:25.528 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.528 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:19:25.528 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:25.528 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:25.528 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:25.528 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:25.528 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:25.528 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:25.528 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:25.528 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:25.528 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:25.528 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:25.528 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:19:25.528 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:19:25.528 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:25.528 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:19:25.528 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:19:25.528 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:25.528 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:25.528 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:25.528 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:25.528 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:25.528 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.528 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:25.528 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.528 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:25.528 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:25.528 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:19:25.528 19:02:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:28.060 19:02:05 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:28.060 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:28.060 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:28.060 Found net devices under 0000:09:00.0: cvl_0_0 00:19:28.060 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:28.061 Found net devices under 0000:09:00.1: cvl_0_1 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:28.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:28.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:19:28.061 00:19:28.061 --- 10.0.0.2 ping statistics --- 00:19:28.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.061 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:28.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:28.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:19:28.061 00:19:28.061 --- 10.0.0.1 ping statistics --- 00:19:28.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.061 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3193477 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3193477 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 3193477 ']' 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:28.061 [2024-07-24 19:02:05.285203] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
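Note: the two pings above verify the namespace-based topology this suite builds from the pair of e810 ports (NET_TYPE=phy, so the ports are presumably cabled back-to-back): the target-side port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk network namespace while the initiator-side port (cvl_0_1, 10.0.0.1) stays in the root namespace. Condensed from the nvmf_tcp_init trace above, a sketch with the same device and namespace names (addr-flush and iptables steps omitted):
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  ping -c 1 10.0.0.2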
00:19:28.061 [2024-07-24 19:02:05.285290] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.061 EAL: No free 2048 kB hugepages reported on node 1 00:19:28.061 [2024-07-24 19:02:05.349440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:28.061 [2024-07-24 19:02:05.457170] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:28.061 [2024-07-24 19:02:05.457221] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:28.061 [2024-07-24 19:02:05.457244] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:28.061 [2024-07-24 19:02:05.457255] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:28.061 [2024-07-24 19:02:05.457265] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:28.061 [2024-07-24 19:02:05.457365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.061 [2024-07-24 19:02:05.457438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:28.061 [2024-07-24 19:02:05.457441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:28.061 [2024-07-24 19:02:05.605914] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:28.061 Malloc0 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.061 
19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.061 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:28.320 [2024-07-24 19:02:05.667740] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:28.320 [2024-07-24 19:02:05.675649] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:28.320 Malloc1 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.320 19:02:05 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3193619 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3193619 /var/tmp/bdevperf.sock 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 3193619 ']' 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:28.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
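Note: the multicontroller setup above reduces to a short JSON-RPC sequence; a condensed sketch using scripts/rpc.py, which the harness's rpc_cmd is a thin wrapper around (cnode2/Malloc1 mirror the cnode1/Malloc0 lines):
  # target side (default RPC socket): transport, malloc bdev, subsystem, namespace, two listeners
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # host side, against bdevperf's private socket: the first attach (below) creates NVMe0n1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
Re-attaching under the same controller name with a different host NQN, a different subsystem NQN, or with -x disable is what the NOT-wrapped rpc_cmd calls below exercise; each is expected to fail with the JSON-RPC error -114 responses that follow.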
00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:28.320 19:02:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:28.579 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:28.579 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:19:28.579 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:28.579 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.579 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:28.837 NVMe0n1 00:19:28.837 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.837 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:28.837 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:19:28.837 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.837 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:28.837 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.837 1 00:19:28.837 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:28.837 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:19:28.837 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:28.837 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:28.837 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:28.837 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:28.837 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:28.837 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:28.837 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.837 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:28.837 request: 00:19:28.837 { 00:19:28.837 "name": "NVMe0", 00:19:28.837 "trtype": "tcp", 00:19:28.837 "traddr": "10.0.0.2", 00:19:28.837 "adrfam": "ipv4", 00:19:28.837 
"trsvcid": "4420", 00:19:28.837 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.837 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:19:28.837 "hostaddr": "10.0.0.2", 00:19:28.837 "hostsvcid": "60000", 00:19:28.838 "prchk_reftag": false, 00:19:28.838 "prchk_guard": false, 00:19:28.838 "hdgst": false, 00:19:28.838 "ddgst": false, 00:19:28.838 "method": "bdev_nvme_attach_controller", 00:19:28.838 "req_id": 1 00:19:28.838 } 00:19:28.838 Got JSON-RPC error response 00:19:28.838 response: 00:19:28.838 { 00:19:28.838 "code": -114, 00:19:28.838 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:19:28.838 } 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:28.838 request: 00:19:28.838 { 00:19:28.838 "name": "NVMe0", 00:19:28.838 "trtype": "tcp", 00:19:28.838 "traddr": "10.0.0.2", 00:19:28.838 "adrfam": "ipv4", 00:19:28.838 "trsvcid": "4420", 00:19:28.838 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:28.838 "hostaddr": "10.0.0.2", 00:19:28.838 "hostsvcid": "60000", 00:19:28.838 "prchk_reftag": false, 00:19:28.838 "prchk_guard": false, 00:19:28.838 "hdgst": false, 00:19:28.838 "ddgst": false, 00:19:28.838 "method": "bdev_nvme_attach_controller", 00:19:28.838 "req_id": 1 00:19:28.838 } 00:19:28.838 Got JSON-RPC error response 00:19:28.838 response: 00:19:28.838 { 00:19:28.838 "code": -114, 00:19:28.838 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:19:28.838 } 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:28.838 request: 00:19:28.838 { 00:19:28.838 "name": "NVMe0", 00:19:28.838 "trtype": "tcp", 00:19:28.838 "traddr": "10.0.0.2", 00:19:28.838 "adrfam": "ipv4", 00:19:28.838 "trsvcid": "4420", 00:19:28.838 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.838 "hostaddr": "10.0.0.2", 00:19:28.838 "hostsvcid": "60000", 00:19:28.838 "prchk_reftag": false, 00:19:28.838 "prchk_guard": false, 00:19:28.838 "hdgst": false, 00:19:28.838 "ddgst": false, 00:19:28.838 "multipath": "disable", 00:19:28.838 "method": "bdev_nvme_attach_controller", 00:19:28.838 "req_id": 1 00:19:28.838 } 00:19:28.838 Got JSON-RPC error response 00:19:28.838 response: 00:19:28.838 { 00:19:28.838 "code": -114, 00:19:28.838 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:19:28.838 } 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:28.838 request: 00:19:28.838 { 00:19:28.838 "name": "NVMe0", 00:19:28.838 "trtype": "tcp", 00:19:28.838 "traddr": "10.0.0.2", 00:19:28.838 "adrfam": "ipv4", 00:19:28.838 "trsvcid": "4420", 00:19:28.838 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.838 "hostaddr": "10.0.0.2", 00:19:28.838 "hostsvcid": "60000", 00:19:28.838 "prchk_reftag": false, 00:19:28.838 "prchk_guard": false, 00:19:28.838 "hdgst": false, 00:19:28.838 "ddgst": false, 00:19:28.838 "multipath": "failover", 00:19:28.838 "method": "bdev_nvme_attach_controller", 00:19:28.838 "req_id": 1 00:19:28.838 } 00:19:28.838 Got JSON-RPC error response 00:19:28.838 response: 00:19:28.838 { 00:19:28.838 "code": -114, 00:19:28.838 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:19:28.838 } 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.838 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:29.096 00:19:29.096 19:02:06 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.096 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:29.096 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.096 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:29.096 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.096 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:29.096 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.096 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:29.096 00:19:29.096 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.096 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:29.096 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.096 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:19:29.096 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:29.096 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.096 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:19:29.096 19:02:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:30.470 0 00:19:30.470 19:02:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:19:30.470 19:02:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.470 19:02:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:30.470 19:02:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.470 19:02:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3193619 00:19:30.470 19:02:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 3193619 ']' 00:19:30.470 19:02:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 3193619 00:19:30.470 19:02:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:19:30.470 19:02:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:30.470 19:02:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3193619 00:19:30.470 19:02:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 
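Only after both paths are attached and the controller count checks out (2 controllers, per the grep -c above) does the script issue perform_tests, which tells the already-running bdevperf to start its configured write workload; the three rejected attach attempts earlier (JSON-RPC error -114) are the negative half of the test, driven through the NOT wrapper, and are expected. A hedged replay of this final step, assuming the same socket path:

    # start the queued workload (-q 128 -o 4096 -w write -t 1) and block until it reports
    ./spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests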
00:19:30.470 19:02:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:30.470 19:02:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3193619' 00:19:30.470 killing process with pid 3193619 00:19:30.470 19:02:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 3193619 00:19:30.470 19:02:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 3193619 00:19:30.470 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:30.470 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.470 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:30.470 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.470 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:30.470 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.470 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:30.470 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.470 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:19:30.470 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:30.470 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:19:30.470 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:19:30.470 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:19:30.470 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:19:30.470 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:19:30.470 [2024-07-24 19:02:05.779741] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
00:19:30.470 [2024-07-24 19:02:05.779838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3193619 ] 00:19:30.470 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.470 [2024-07-24 19:02:05.839742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.470 [2024-07-24 19:02:05.949930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.470 [2024-07-24 19:02:06.556733] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 0c71e1e6-ddd0-4db8-9a0b-03250d854b4e already exists 00:19:30.470 [2024-07-24 19:02:06.556774] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:0c71e1e6-ddd0-4db8-9a0b-03250d854b4e alias for bdev NVMe1n1 00:19:30.470 [2024-07-24 19:02:06.556789] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:19:30.470 Running I/O for 1 seconds... 00:19:30.470 00:19:30.470 Latency(us) 00:19:30.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.470 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:19:30.470 NVMe0n1 : 1.00 18890.50 73.79 0.00 0.00 6765.97 4199.16 13495.56 00:19:30.470 =================================================================================================================== 00:19:30.470 Total : 18890.50 73.79 0.00 0.00 6765.97 4199.16 13495.56 00:19:30.470 Received shutdown signal, test time was about 1.000000 seconds 00:19:30.470 00:19:30.470 Latency(us) 00:19:30.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.470 =================================================================================================================== 00:19:30.470 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:30.470 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:19:30.470 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:30.470 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:19:30.470 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:19:30.470 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:30.470 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:19:30.470 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:30.470 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:19:30.470 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:30.470 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:30.470 rmmod nvme_tcp 00:19:30.728 rmmod nvme_fabrics 00:19:30.728 rmmod nvme_keyring 00:19:30.728 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:30.728 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:19:30.728 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:19:30.728 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3193477 ']' 00:19:30.728 19:02:08 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3193477 00:19:30.728 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 3193477 ']' 00:19:30.728 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 3193477 00:19:30.728 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:19:30.728 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:30.728 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3193477 00:19:30.728 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:30.728 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:30.728 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3193477' 00:19:30.728 killing process with pid 3193477 00:19:30.728 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 3193477 00:19:30.728 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 3193477 00:19:30.986 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:30.986 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:30.986 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:30.986 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:30.986 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:30.986 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.986 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:30.986 19:02:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:33.518 00:19:33.518 real 0m7.506s 00:19:33.518 user 0m11.768s 00:19:33.518 sys 0m2.366s 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:33.518 ************************************ 00:19:33.518 END TEST nvmf_multicontroller 00:19:33.518 ************************************ 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.518 ************************************ 00:19:33.518 START TEST nvmf_aer 00:19:33.518 ************************************ 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:33.518 * Looking for test storage... 00:19:33.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:19:33.518 19:02:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:35.417 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:35.417 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:35.417 Found net devices under 0000:09:00.0: cvl_0_0 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.417 19:02:12 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:35.417 Found net devices under 0000:09:00.1: cvl_0_1 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:35.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:19:35.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:19:35.417 00:19:35.417 --- 10.0.0.2 ping statistics --- 00:19:35.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.417 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:35.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:35.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:19:35.417 00:19:35.417 --- 10.0.0.1 ping statistics --- 00:19:35.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.417 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:35.417 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:35.418 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:35.418 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:35.418 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:35.418 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:19:35.418 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:35.418 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:35.418 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:35.418 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3195830 00:19:35.418 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:35.418 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3195830 00:19:35.418 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 3195830 ']' 00:19:35.418 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.418 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:35.418 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.418 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:35.418 19:02:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:35.418 [2024-07-24 19:02:12.784520] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
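The ping pair above validates the test topology: one port of the E810 pair (cvl_0_0) is moved into a private network namespace to act as the target side at 10.0.0.2, while its peer (cvl_0_1) stays in the root namespace as the 10.0.0.1 initiator, and nvmf_tgt is then started inside that namespace. Condensed, with the interface names taken from the commands logged above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let target-port traffic through the initiator-side firewall
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT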
00:19:35.418 [2024-07-24 19:02:12.784589] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:35.418 EAL: No free 2048 kB hugepages reported on node 1 00:19:35.418 [2024-07-24 19:02:12.847473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:35.418 [2024-07-24 19:02:12.966948] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:35.418 [2024-07-24 19:02:12.966995] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:35.418 [2024-07-24 19:02:12.967018] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:35.418 [2024-07-24 19:02:12.967032] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:35.418 [2024-07-24 19:02:12.967043] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:35.418 [2024-07-24 19:02:12.967126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.418 [2024-07-24 19:02:12.967182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:35.418 [2024-07-24 19:02:12.967228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:35.418 [2024-07-24 19:02:12.967231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:36.350 [2024-07-24 19:02:13.797897] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:36.350 Malloc0 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:36.350 19:02:13 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:36.350 [2024-07-24 19:02:13.852034] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:36.350 [ 00:19:36.350 { 00:19:36.350 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:36.350 "subtype": "Discovery", 00:19:36.350 "listen_addresses": [], 00:19:36.350 "allow_any_host": true, 00:19:36.350 "hosts": [] 00:19:36.350 }, 00:19:36.350 { 00:19:36.350 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:36.350 "subtype": "NVMe", 00:19:36.350 "listen_addresses": [ 00:19:36.350 { 00:19:36.350 "trtype": "TCP", 00:19:36.350 "adrfam": "IPv4", 00:19:36.350 "traddr": "10.0.0.2", 00:19:36.350 "trsvcid": "4420" 00:19:36.350 } 00:19:36.350 ], 00:19:36.350 "allow_any_host": true, 00:19:36.350 "hosts": [], 00:19:36.350 "serial_number": "SPDK00000000000001", 00:19:36.350 "model_number": "SPDK bdev Controller", 00:19:36.350 "max_namespaces": 2, 00:19:36.350 "min_cntlid": 1, 00:19:36.350 "max_cntlid": 65519, 00:19:36.350 "namespaces": [ 00:19:36.350 { 00:19:36.350 "nsid": 1, 00:19:36.350 "bdev_name": "Malloc0", 00:19:36.350 "name": "Malloc0", 00:19:36.350 "nguid": "CB2BEB215E574CD08DE0956E948C5A83", 00:19:36.350 "uuid": "cb2beb21-5e57-4cd0-8de0-956e948c5a83" 00:19:36.350 } 00:19:36.350 ] 00:19:36.350 } 00:19:36.350 ] 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3195987 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:19:36.350 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:19:36.350 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.607 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:36.607 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:19:36.607 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:19:36.607 19:02:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:19:36.608 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:36.608 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:19:36.608 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:19:36.608 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:19:36.608 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:36.608 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:36.608 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:19:36.608 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:19:36.608 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.608 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:36.865 Malloc1 00:19:36.865 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.865 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:36.866 Asynchronous Event Request test 00:19:36.866 Attaching to 10.0.0.2 00:19:36.866 Attached to 10.0.0.2 00:19:36.866 Registering asynchronous event callbacks... 00:19:36.866 Starting namespace attribute notice tests for all controllers... 00:19:36.866 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:36.866 aer_cb - Changed Namespace 00:19:36.866 Cleaning up... 
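The block above is the heart of the AER check: test/nvme/aer/aer connects to cnode1 and registers for asynchronous events (the touch file it is given with -t is what the waitforfile polling loop watches for), the script then hot-adds Malloc1 as a second namespace, and the 'aer_cb - Changed Namespace' line confirms the Namespace-Attribute-Changed notification was delivered; the nsid-2 entry in the subsystem dump that follows shows the new namespace. A sketch of the trigger alone, assuming cnode1 already serves Malloc0:

    rpc.py bdev_malloc_create 64 4096 --name Malloc1
    # adding the namespace raises an AEN on every attached controller
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2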
00:19:36.866 [ 00:19:36.866 { 00:19:36.866 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:36.866 "subtype": "Discovery", 00:19:36.866 "listen_addresses": [], 00:19:36.866 "allow_any_host": true, 00:19:36.866 "hosts": [] 00:19:36.866 }, 00:19:36.866 { 00:19:36.866 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:36.866 "subtype": "NVMe", 00:19:36.866 "listen_addresses": [ 00:19:36.866 { 00:19:36.866 "trtype": "TCP", 00:19:36.866 "adrfam": "IPv4", 00:19:36.866 "traddr": "10.0.0.2", 00:19:36.866 "trsvcid": "4420" 00:19:36.866 } 00:19:36.866 ], 00:19:36.866 "allow_any_host": true, 00:19:36.866 "hosts": [], 00:19:36.866 "serial_number": "SPDK00000000000001", 00:19:36.866 "model_number": "SPDK bdev Controller", 00:19:36.866 "max_namespaces": 2, 00:19:36.866 "min_cntlid": 1, 00:19:36.866 "max_cntlid": 65519, 00:19:36.866 "namespaces": [ 00:19:36.866 { 00:19:36.866 "nsid": 1, 00:19:36.866 "bdev_name": "Malloc0", 00:19:36.866 "name": "Malloc0", 00:19:36.866 "nguid": "CB2BEB215E574CD08DE0956E948C5A83", 00:19:36.866 "uuid": "cb2beb21-5e57-4cd0-8de0-956e948c5a83" 00:19:36.866 }, 00:19:36.866 { 00:19:36.866 "nsid": 2, 00:19:36.866 "bdev_name": "Malloc1", 00:19:36.866 "name": "Malloc1", 00:19:36.866 "nguid": "2FE9A242902C4AA0B893547809A5FF52", 00:19:36.866 "uuid": "2fe9a242-902c-4aa0-b893-547809a5ff52" 00:19:36.866 } 00:19:36.866 ] 00:19:36.866 } 00:19:36.866 ] 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3195987 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:36.866 rmmod 
nvme_tcp 00:19:36.866 rmmod nvme_fabrics 00:19:36.866 rmmod nvme_keyring 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3195830 ']' 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3195830 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 3195830 ']' 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 3195830 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3195830 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3195830' 00:19:36.866 killing process with pid 3195830 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 3195830 00:19:36.866 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 3195830 00:19:37.124 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:37.124 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:37.124 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:37.124 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:37.124 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:37.124 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.124 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:37.124 19:02:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.654 19:02:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:39.654 00:19:39.654 real 0m6.144s 00:19:39.654 user 0m7.541s 00:19:39.654 sys 0m1.861s 00:19:39.654 19:02:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:39.654 19:02:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:39.654 ************************************ 00:19:39.654 END TEST nvmf_aer 00:19:39.654 ************************************ 00:19:39.654 19:02:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:39.654 19:02:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:39.654 19:02:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:39.654 19:02:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.654 
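The teardown just traced is the standard nvmftestfini path: unload nvme-tcp and nvme-fabrics (modprobe -r pulls nvme_keyring out with them, hence the three rmmod lines), kill the target by pid, remove the spdk network namespace, and flush the initiator address. The killprocess helper traced at @950-@974 reduces to roughly this (simplified sketch; the reactor_0/sudo comparison in the trace guards against killing a sudo wrapper instead of the target):

    killprocess() {
        local pid=$1
        kill -0 "$pid"                                              # error out early if the pid is already gone
        [ "$(uname)" = Linux ] && ps --no-headers -o comm= "$pid"   # log what we are about to kill
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                                 # reap it before the next suite starts
    }

nvmf_aer completes in about 6 seconds of wall time, and run_test immediately wraps the next script, async_init.sh, in the START/END banners that follow.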
************************************ 00:19:39.654 START TEST nvmf_async_init 00:19:39.654 ************************************ 00:19:39.654 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:39.654 * Looking for test storage... 00:19:39.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:39.654 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:39.654 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:19:39.654 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:39.654 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:39.654 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:39.654 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:39.655 
19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=5ea80f6871a94caa84ec637c7b5e8b86 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:19:39.655 19:02:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 
00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:41.553 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:41.553 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:41.553 
19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:41.553 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:41.554 Found net devices under 0000:09:00.0: cvl_0_0 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:41.554 Found net devices under 0000:09:00.1: cvl_0_1 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:41.554 19:02:18 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:41.554 19:02:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:41.554 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:41.554 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:41.554 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:41.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:41.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:19:41.554 00:19:41.554 --- 10.0.0.2 ping statistics --- 00:19:41.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.554 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:19:41.554 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:41.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:41.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:19:41.554 00:19:41.554 --- 10.0.0.1 ping statistics --- 00:19:41.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.554 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:19:41.554 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:41.554 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:19:41.554 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:41.554 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:41.554 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:41.554 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:41.554 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:41.554 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:41.554 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:41.554 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:19:41.554 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:41.554 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:41.554 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:41.554 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3197930 00:19:41.554 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:41.554 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3197930 00:19:41.554 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 3197930 ']' 00:19:41.554 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.554 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:41.554 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:41.554 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:41.554 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:41.554 [2024-07-24 19:02:19.101928] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
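The two pings just traced validate the wiring that nvmf_tcp_init set up: one port of the two-port E810 adapter (cvl_0_0) is moved into a private network namespace for the target while its peer (cvl_0_1) stays in the root namespace for the initiator, so 10.0.0.2 and 10.0.0.1 talk over a real back-to-back link. Condensed from the xtrace above (commands copied verbatim from the trace):

    ip netns add cvl_0_0_ns_spdk                    # target side gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator keeps the peer port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port

With the link proven, nvmfappstart launches nvmf_tgt inside the namespace; its startup banner continues below.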
00:19:41.554 [2024-07-24 19:02:19.102029] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:41.554 EAL: No free 2048 kB hugepages reported on node 1 00:19:41.812 [2024-07-24 19:02:19.166733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.812 [2024-07-24 19:02:19.273894] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:41.812 [2024-07-24 19:02:19.273946] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:41.812 [2024-07-24 19:02:19.273969] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:41.812 [2024-07-24 19:02:19.273980] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:41.812 [2024-07-24 19:02:19.273989] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:41.812 [2024-07-24 19:02:19.274016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.812 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:41.812 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:19:41.812 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:41.812 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:41.812 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:41.812 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:41.812 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:19:41.812 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.812 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:41.812 [2024-07-24 19:02:19.411658] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:42.069 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.069 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:19:42.069 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.069 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:42.069 null0 00:19:42.069 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.069 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:19:42.069 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.069 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:42.069 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.069 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:19:42.069 19:02:19 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.069 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:42.069 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.069 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 5ea80f6871a94caa84ec637c7b5e8b86 00:19:42.069 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.069 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:42.069 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.069 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:42.069 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.069 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:42.069 [2024-07-24 19:02:19.451911] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:42.069 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.069 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:19:42.069 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.069 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:42.326 nvme0n1 00:19:42.326 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.326 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:42.326 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.326 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:42.326 [ 00:19:42.326 { 00:19:42.326 "name": "nvme0n1", 00:19:42.326 "aliases": [ 00:19:42.326 "5ea80f68-71a9-4caa-84ec-637c7b5e8b86" 00:19:42.326 ], 00:19:42.326 "product_name": "NVMe disk", 00:19:42.326 "block_size": 512, 00:19:42.326 "num_blocks": 2097152, 00:19:42.326 "uuid": "5ea80f68-71a9-4caa-84ec-637c7b5e8b86", 00:19:42.326 "assigned_rate_limits": { 00:19:42.326 "rw_ios_per_sec": 0, 00:19:42.326 "rw_mbytes_per_sec": 0, 00:19:42.326 "r_mbytes_per_sec": 0, 00:19:42.326 "w_mbytes_per_sec": 0 00:19:42.326 }, 00:19:42.326 "claimed": false, 00:19:42.326 "zoned": false, 00:19:42.326 "supported_io_types": { 00:19:42.326 "read": true, 00:19:42.326 "write": true, 00:19:42.326 "unmap": false, 00:19:42.326 "flush": true, 00:19:42.326 "reset": true, 00:19:42.326 "nvme_admin": true, 00:19:42.326 "nvme_io": true, 00:19:42.326 "nvme_io_md": false, 00:19:42.326 "write_zeroes": true, 00:19:42.326 "zcopy": false, 00:19:42.326 "get_zone_info": false, 00:19:42.326 "zone_management": false, 00:19:42.326 "zone_append": false, 00:19:42.326 "compare": true, 00:19:42.326 "compare_and_write": true, 00:19:42.326 "abort": true, 00:19:42.326 "seek_hole": false, 00:19:42.326 "seek_data": false, 00:19:42.326 "copy": true, 00:19:42.326 "nvme_iov_md": 
false 00:19:42.326 }, 00:19:42.326 "memory_domains": [ 00:19:42.326 { 00:19:42.326 "dma_device_id": "system", 00:19:42.326 "dma_device_type": 1 00:19:42.326 } 00:19:42.326 ], 00:19:42.326 "driver_specific": { 00:19:42.326 "nvme": [ 00:19:42.326 { 00:19:42.326 "trid": { 00:19:42.326 "trtype": "TCP", 00:19:42.326 "adrfam": "IPv4", 00:19:42.326 "traddr": "10.0.0.2", 00:19:42.326 "trsvcid": "4420", 00:19:42.326 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:42.326 }, 00:19:42.326 "ctrlr_data": { 00:19:42.326 "cntlid": 1, 00:19:42.326 "vendor_id": "0x8086", 00:19:42.326 "model_number": "SPDK bdev Controller", 00:19:42.326 "serial_number": "00000000000000000000", 00:19:42.326 "firmware_revision": "24.09", 00:19:42.326 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:42.326 "oacs": { 00:19:42.326 "security": 0, 00:19:42.326 "format": 0, 00:19:42.326 "firmware": 0, 00:19:42.326 "ns_manage": 0 00:19:42.326 }, 00:19:42.326 "multi_ctrlr": true, 00:19:42.326 "ana_reporting": false 00:19:42.326 }, 00:19:42.326 "vs": { 00:19:42.326 "nvme_version": "1.3" 00:19:42.326 }, 00:19:42.326 "ns_data": { 00:19:42.326 "id": 1, 00:19:42.326 "can_share": true 00:19:42.326 } 00:19:42.326 } 00:19:42.326 ], 00:19:42.326 "mp_policy": "active_passive" 00:19:42.326 } 00:19:42.326 } 00:19:42.326 ] 00:19:42.326 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.326 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:19:42.326 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.326 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:42.326 [2024-07-24 19:02:19.705022] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:42.326 [2024-07-24 19:02:19.705123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd1e210 (9): Bad file descriptor 00:19:42.326 [2024-07-24 19:02:19.847266] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
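That notice is the pass condition for the reset half of async_init.sh. The whole bring-up is equivalent to driving scripts/rpc.py by hand; a sketch of the same RPCs the rpc_cmd wrapper issued above (paths relative to the spdk checkout):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py bdev_null_create null0 1024 512        # 1024 MiB backing, 512 B blocks
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 \
        -g 5ea80f6871a94caa84ec637c7b5e8b86
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0
    ./scripts/rpc.py bdev_nvme_reset_controller nvme0

The "Failed to flush tqpair ... (9): Bad file descriptor" ERROR above is logged while the initiator deliberately drops the old qpair during the reset, so it is expected noise; the real checks are the "Resetting controller successful" notice and, in the bdev_get_bdevs dump below, the cntlid moving from 1 to 2 because the reconnect creates a new controller association.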
00:19:42.326 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.326 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:42.326 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.326 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:42.326 [ 00:19:42.326 { 00:19:42.326 "name": "nvme0n1", 00:19:42.326 "aliases": [ 00:19:42.326 "5ea80f68-71a9-4caa-84ec-637c7b5e8b86" 00:19:42.326 ], 00:19:42.326 "product_name": "NVMe disk", 00:19:42.326 "block_size": 512, 00:19:42.326 "num_blocks": 2097152, 00:19:42.326 "uuid": "5ea80f68-71a9-4caa-84ec-637c7b5e8b86", 00:19:42.326 "assigned_rate_limits": { 00:19:42.326 "rw_ios_per_sec": 0, 00:19:42.326 "rw_mbytes_per_sec": 0, 00:19:42.326 "r_mbytes_per_sec": 0, 00:19:42.326 "w_mbytes_per_sec": 0 00:19:42.326 }, 00:19:42.326 "claimed": false, 00:19:42.326 "zoned": false, 00:19:42.326 "supported_io_types": { 00:19:42.326 "read": true, 00:19:42.326 "write": true, 00:19:42.326 "unmap": false, 00:19:42.326 "flush": true, 00:19:42.326 "reset": true, 00:19:42.326 "nvme_admin": true, 00:19:42.326 "nvme_io": true, 00:19:42.326 "nvme_io_md": false, 00:19:42.326 "write_zeroes": true, 00:19:42.326 "zcopy": false, 00:19:42.326 "get_zone_info": false, 00:19:42.326 "zone_management": false, 00:19:42.326 "zone_append": false, 00:19:42.326 "compare": true, 00:19:42.326 "compare_and_write": true, 00:19:42.326 "abort": true, 00:19:42.326 "seek_hole": false, 00:19:42.326 "seek_data": false, 00:19:42.326 "copy": true, 00:19:42.326 "nvme_iov_md": false 00:19:42.326 }, 00:19:42.326 "memory_domains": [ 00:19:42.326 { 00:19:42.326 "dma_device_id": "system", 00:19:42.326 "dma_device_type": 1 00:19:42.326 } 00:19:42.326 ], 00:19:42.326 "driver_specific": { 00:19:42.326 "nvme": [ 00:19:42.326 { 00:19:42.326 "trid": { 00:19:42.326 "trtype": "TCP", 00:19:42.326 "adrfam": "IPv4", 00:19:42.326 "traddr": "10.0.0.2", 00:19:42.326 "trsvcid": "4420", 00:19:42.326 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:42.326 }, 00:19:42.326 "ctrlr_data": { 00:19:42.326 "cntlid": 2, 00:19:42.326 "vendor_id": "0x8086", 00:19:42.326 "model_number": "SPDK bdev Controller", 00:19:42.326 "serial_number": "00000000000000000000", 00:19:42.326 "firmware_revision": "24.09", 00:19:42.326 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:42.326 "oacs": { 00:19:42.326 "security": 0, 00:19:42.326 "format": 0, 00:19:42.326 "firmware": 0, 00:19:42.326 "ns_manage": 0 00:19:42.326 }, 00:19:42.326 "multi_ctrlr": true, 00:19:42.326 "ana_reporting": false 00:19:42.326 }, 00:19:42.326 "vs": { 00:19:42.326 "nvme_version": "1.3" 00:19:42.326 }, 00:19:42.326 "ns_data": { 00:19:42.326 "id": 1, 00:19:42.326 "can_share": true 00:19:42.326 } 00:19:42.326 } 00:19:42.326 ], 00:19:42.326 "mp_policy": "active_passive" 00:19:42.326 } 00:19:42.326 } 00:19:42.326 ] 00:19:42.326 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.326 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.326 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.326 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:42.326 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.326 19:02:19 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:19:42.326 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.PVAgYUGrto 00:19:42.326 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:42.326 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.PVAgYUGrto 00:19:42.327 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:19:42.327 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.327 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:42.327 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.327 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:19:42.327 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.327 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:42.327 [2024-07-24 19:02:19.905818] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:42.327 [2024-07-24 19:02:19.905944] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:42.327 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.327 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PVAgYUGrto 00:19:42.327 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.327 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:42.327 [2024-07-24 19:02:19.913844] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:42.327 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.327 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PVAgYUGrto 00:19:42.327 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.327 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:42.327 [2024-07-24 19:02:19.921869] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:42.327 [2024-07-24 19:02:19.921927] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:42.583 nvme0n1 00:19:42.583 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.583 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:42.583 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 
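The last leg of the test exercises TLS: the mktemp file holds a PSK in NVMe interchange format, --secure-channel marks the extra 4421 listener as TLS-only, and the same key is registered for the host on the subsystem side and passed to the initiator attach. Condensed from the xtrace above (commands copied from the trace; the KEY variable stands in for the script's key_path):

    KEY=/tmp/tmp.PVAgYUGrto
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY"
    chmod 0600 "$KEY"                       # restrict the key file, as the script does
    ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 \
        -s 4421 --secure-channel
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 \
        --psk "$KEY"
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"

The "TLS support is considered experimental" and PSK-path deprecation WARNINGs are expected on this SPDK revision (the shutdown summary below counts one hit for each); the bdev dump that follows shows the namespace now reached through the secured 4421 listener with cntlid 3.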
00:19:42.583 19:02:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:42.583 [ 00:19:42.583 { 00:19:42.583 "name": "nvme0n1", 00:19:42.583 "aliases": [ 00:19:42.583 "5ea80f68-71a9-4caa-84ec-637c7b5e8b86" 00:19:42.583 ], 00:19:42.583 "product_name": "NVMe disk", 00:19:42.583 "block_size": 512, 00:19:42.583 "num_blocks": 2097152, 00:19:42.583 "uuid": "5ea80f68-71a9-4caa-84ec-637c7b5e8b86", 00:19:42.583 "assigned_rate_limits": { 00:19:42.583 "rw_ios_per_sec": 0, 00:19:42.583 "rw_mbytes_per_sec": 0, 00:19:42.583 "r_mbytes_per_sec": 0, 00:19:42.583 "w_mbytes_per_sec": 0 00:19:42.583 }, 00:19:42.583 "claimed": false, 00:19:42.583 "zoned": false, 00:19:42.583 "supported_io_types": { 00:19:42.583 "read": true, 00:19:42.583 "write": true, 00:19:42.583 "unmap": false, 00:19:42.583 "flush": true, 00:19:42.583 "reset": true, 00:19:42.583 "nvme_admin": true, 00:19:42.583 "nvme_io": true, 00:19:42.583 "nvme_io_md": false, 00:19:42.583 "write_zeroes": true, 00:19:42.583 "zcopy": false, 00:19:42.583 "get_zone_info": false, 00:19:42.583 "zone_management": false, 00:19:42.583 "zone_append": false, 00:19:42.583 "compare": true, 00:19:42.583 "compare_and_write": true, 00:19:42.583 "abort": true, 00:19:42.583 "seek_hole": false, 00:19:42.583 "seek_data": false, 00:19:42.583 "copy": true, 00:19:42.583 "nvme_iov_md": false 00:19:42.583 }, 00:19:42.583 "memory_domains": [ 00:19:42.583 { 00:19:42.583 "dma_device_id": "system", 00:19:42.583 "dma_device_type": 1 00:19:42.583 } 00:19:42.583 ], 00:19:42.584 "driver_specific": { 00:19:42.584 "nvme": [ 00:19:42.584 { 00:19:42.584 "trid": { 00:19:42.584 "trtype": "TCP", 00:19:42.584 "adrfam": "IPv4", 00:19:42.584 "traddr": "10.0.0.2", 00:19:42.584 "trsvcid": "4421", 00:19:42.584 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:42.584 }, 00:19:42.584 "ctrlr_data": { 00:19:42.584 "cntlid": 3, 00:19:42.584 "vendor_id": "0x8086", 00:19:42.584 "model_number": "SPDK bdev Controller", 00:19:42.584 "serial_number": "00000000000000000000", 00:19:42.584 "firmware_revision": "24.09", 00:19:42.584 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:42.584 "oacs": { 00:19:42.584 "security": 0, 00:19:42.584 "format": 0, 00:19:42.584 "firmware": 0, 00:19:42.584 "ns_manage": 0 00:19:42.584 }, 00:19:42.584 "multi_ctrlr": true, 00:19:42.584 "ana_reporting": false 00:19:42.584 }, 00:19:42.584 "vs": { 00:19:42.584 "nvme_version": "1.3" 00:19:42.584 }, 00:19:42.584 "ns_data": { 00:19:42.584 "id": 1, 00:19:42.584 "can_share": true 00:19:42.584 } 00:19:42.584 } 00:19:42.584 ], 00:19:42.584 "mp_policy": "active_passive" 00:19:42.584 } 00:19:42.584 } 00:19:42.584 ] 00:19:42.584 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.584 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.584 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.584 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:42.584 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.584 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.PVAgYUGrto 00:19:42.584 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:19:42.584 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:19:42.584 19:02:20 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:42.584 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:19:42.584 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:42.584 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:19:42.584 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:42.584 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:42.584 rmmod nvme_tcp 00:19:42.584 rmmod nvme_fabrics 00:19:42.584 rmmod nvme_keyring 00:19:42.584 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:42.584 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:19:42.584 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:19:42.584 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3197930 ']' 00:19:42.584 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3197930 00:19:42.584 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 3197930 ']' 00:19:42.584 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 3197930 00:19:42.584 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:19:42.584 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:42.584 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3197930 00:19:42.584 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:42.584 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:42.584 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3197930' 00:19:42.584 killing process with pid 3197930 00:19:42.584 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 3197930 00:19:42.584 [2024-07-24 19:02:20.107177] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:42.584 [2024-07-24 19:02:20.107221] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:42.584 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 3197930 00:19:42.842 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:42.842 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:42.842 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:42.842 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:42.842 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:42.842 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.842 19:02:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:42.842 19:02:20 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:45.369 00:19:45.369 real 0m5.645s 00:19:45.369 user 0m2.138s 00:19:45.369 sys 0m1.894s 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:45.369 ************************************ 00:19:45.369 END TEST nvmf_async_init 00:19:45.369 ************************************ 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.369 ************************************ 00:19:45.369 START TEST dma 00:19:45.369 ************************************ 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:45.369 * Looking for test storage... 00:19:45.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:45.369 
19:02:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:45.369 [paths/export.sh@2-@6: repeated PATH prepend and echo output elided; identical to the dump in the nvmf_async_init section above] 19:02:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:45.369 19:02:22
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:19:45.369 00:19:45.369 real 0m0.062s 00:19:45.369 user 0m0.029s 00:19:45.369 sys 0m0.038s 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:19:45.369 ************************************ 00:19:45.369 END TEST dma 00:19:45.369 ************************************ 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.369 ************************************ 00:19:45.369 START TEST nvmf_identify 00:19:45.369 ************************************ 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:45.369 * Looking for test storage... 00:19:45.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:45.369 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:19:45.370 19:02:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:47.326 19:02:24 
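The scan below (gather_supported_nvmf_pci_devs) walks every PCI function, keeps the ports whose vendor:device IDs match the supported-NIC tables (0x8086:0x159b, an Intel E810, is what this job matches), and records the kernel net devices registered under each matching function. A minimal standalone sketch of the same sysfs walk, assuming the 0x8086/0x159b IDs from this run:

    # list net devices for every Intel E810 (0x8086:0x159b) PCI function
    for pci in /sys/bus/pci/devices/*; do
        ven=$(cat "$pci/vendor") dev=$(cat "$pci/device")
        if [[ $ven == 0x8086 && $dev == 0x159b ]]; then
            echo "Found ${pci##*/} ($ven - $dev)"
            ls "$pci/net"    # cvl_0_0 / cvl_0_1 in this run
        fi
    done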
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:47.326 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:47.326 19:02:24 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:47.326 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:47.326 Found net devices under 0000:09:00.0: cvl_0_0 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:47.326 Found net devices under 0000:09:00.1: cvl_0_1 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:47.326 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:47.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:47.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:19:47.327 00:19:47.327 --- 10.0.0.2 ping statistics --- 00:19:47.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.327 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:47.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:47.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:19:47.327 00:19:47.327 --- 10.0.0.1 ping statistics --- 00:19:47.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.327 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3200052 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3200052 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 3200052 ']' 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:47.327 19:02:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:47.327 [2024-07-24 19:02:24.727866] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
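Condensed, the nvmf_tcp_init sequence traced above takes the two E810 ports (which reach each other on this phy rig), moves the target-side port into a fresh network namespace, addresses both ends from the same /24, opens the NVMe/TCP port in the firewall, and verifies reachability in both directions before launching the target inside the namespace. The same steps as a standalone sketch, using the interface names and addresses from this run (target binary path abbreviated):

    TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"; ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"          # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                            # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1        # target -> initiator
    # every target-side command now runs inside the namespace:
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF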
00:19:47.327 [2024-07-24 19:02:24.727943] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:47.327 EAL: No free 2048 kB hugepages reported on node 1 00:19:47.327 [2024-07-24 19:02:24.797138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:47.327 [2024-07-24 19:02:24.917577] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:47.327 [2024-07-24 19:02:24.917636] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:47.327 [2024-07-24 19:02:24.917652] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:47.327 [2024-07-24 19:02:24.917666] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:47.327 [2024-07-24 19:02:24.917678] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:47.327 [2024-07-24 19:02:24.917758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.327 [2024-07-24 19:02:24.917824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:47.327 [2024-07-24 19:02:24.917849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:47.327 [2024-07-24 19:02:24.917851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.585 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:47.585 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:19:47.585 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:47.585 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.585 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:47.585 [2024-07-24 19:02:25.060624] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:47.585 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.585 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:19:47.585 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:47.585 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:47.585 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:47.585 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.585 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:47.585 Malloc0 00:19:47.585 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.585 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:47.585 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.585 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:47.585 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:19:47.585 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:19:47.585 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.585 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:47.585 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.585 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:47.585 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.585 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:47.585 [2024-07-24 19:02:25.136737] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:47.585 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.585 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:47.585 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.585 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:47.585 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.585 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:19:47.585 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.585 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:47.585 [ 00:19:47.585 { 00:19:47.585 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:47.585 "subtype": "Discovery", 00:19:47.585 "listen_addresses": [ 00:19:47.585 { 00:19:47.585 "trtype": "TCP", 00:19:47.585 "adrfam": "IPv4", 00:19:47.585 "traddr": "10.0.0.2", 00:19:47.585 "trsvcid": "4420" 00:19:47.585 } 00:19:47.585 ], 00:19:47.585 "allow_any_host": true, 00:19:47.585 "hosts": [] 00:19:47.585 }, 00:19:47.585 { 00:19:47.585 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.585 "subtype": "NVMe", 00:19:47.585 "listen_addresses": [ 00:19:47.585 { 00:19:47.585 "trtype": "TCP", 00:19:47.585 "adrfam": "IPv4", 00:19:47.585 "traddr": "10.0.0.2", 00:19:47.585 "trsvcid": "4420" 00:19:47.585 } 00:19:47.585 ], 00:19:47.585 "allow_any_host": true, 00:19:47.585 "hosts": [], 00:19:47.585 "serial_number": "SPDK00000000000001", 00:19:47.585 "model_number": "SPDK bdev Controller", 00:19:47.585 "max_namespaces": 32, 00:19:47.585 "min_cntlid": 1, 00:19:47.585 "max_cntlid": 65519, 00:19:47.585 "namespaces": [ 00:19:47.585 { 00:19:47.585 "nsid": 1, 00:19:47.585 "bdev_name": "Malloc0", 00:19:47.585 "name": "Malloc0", 00:19:47.585 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:19:47.585 "eui64": "ABCDEF0123456789", 00:19:47.585 "uuid": "e7f4d4ad-6660-45f6-8570-c7a1bc397215" 00:19:47.585 } 00:19:47.585 ] 00:19:47.585 } 00:19:47.585 ] 00:19:47.585 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.585 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:19:47.585 [2024-07-24 19:02:25.180911] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:19:47.585 [2024-07-24 19:02:25.180961] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3200202 ] 00:19:47.848 EAL: No free 2048 kB hugepages reported on node 1 00:19:47.848 [2024-07-24 19:02:25.219589] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:19:47.848 [2024-07-24 19:02:25.219647] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:47.848 [2024-07-24 19:02:25.219657] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:47.848 [2024-07-24 19:02:25.219675] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:47.848 [2024-07-24 19:02:25.219688] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:47.848 [2024-07-24 19:02:25.223146] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:19:47.848 [2024-07-24 19:02:25.223205] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2229540 0 00:19:47.848 [2024-07-24 19:02:25.231114] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:47.848 [2024-07-24 19:02:25.231140] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:47.848 [2024-07-24 19:02:25.231149] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:47.848 [2024-07-24 19:02:25.231155] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:47.848 [2024-07-24 19:02:25.231220] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.848 [2024-07-24 19:02:25.231233] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.848 [2024-07-24 19:02:25.231240] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2229540) 00:19:47.848 [2024-07-24 19:02:25.231258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:47.848 [2024-07-24 19:02:25.231285] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22893c0, cid 0, qid 0 00:19:47.848 [2024-07-24 19:02:25.237119] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.848 [2024-07-24 19:02:25.237137] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.848 [2024-07-24 19:02:25.237144] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.848 [2024-07-24 19:02:25.237152] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22893c0) on tqpair=0x2229540 00:19:47.848 [2024-07-24 19:02:25.237166] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:47.848 [2024-07-24 19:02:25.237193] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:19:47.848 [2024-07-24 19:02:25.237203] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] 
setting state to read vs wait for vs (no timeout) 00:19:47.848 [2024-07-24 19:02:25.237228] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.848 [2024-07-24 19:02:25.237237] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.848 [2024-07-24 19:02:25.237244] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2229540) 00:19:47.848 [2024-07-24 19:02:25.237255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.848 [2024-07-24 19:02:25.237280] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22893c0, cid 0, qid 0 00:19:47.848 [2024-07-24 19:02:25.237459] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.848 [2024-07-24 19:02:25.237472] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.848 [2024-07-24 19:02:25.237479] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.848 [2024-07-24 19:02:25.237486] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22893c0) on tqpair=0x2229540 00:19:47.848 [2024-07-24 19:02:25.237498] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:19:47.848 [2024-07-24 19:02:25.237513] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:19:47.848 [2024-07-24 19:02:25.237525] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.848 [2024-07-24 19:02:25.237533] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.848 [2024-07-24 19:02:25.237539] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2229540) 00:19:47.848 [2024-07-24 19:02:25.237550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.848 [2024-07-24 19:02:25.237572] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22893c0, cid 0, qid 0 00:19:47.848 [2024-07-24 19:02:25.237693] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.848 [2024-07-24 19:02:25.237708] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.848 [2024-07-24 19:02:25.237715] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.848 [2024-07-24 19:02:25.237722] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22893c0) on tqpair=0x2229540 00:19:47.849 [2024-07-24 19:02:25.237730] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:19:47.849 [2024-07-24 19:02:25.237745] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:19:47.849 [2024-07-24 19:02:25.237757] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.849 [2024-07-24 19:02:25.237765] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.849 [2024-07-24 19:02:25.237771] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2229540) 00:19:47.849 [2024-07-24 19:02:25.237782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.849 [2024-07-24 19:02:25.237804] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22893c0, cid 0, qid 0 00:19:47.849 [2024-07-24 19:02:25.237921] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.849 [2024-07-24 19:02:25.237936] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.849 [2024-07-24 19:02:25.237943] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.849 [2024-07-24 19:02:25.237950] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22893c0) on tqpair=0x2229540 00:19:47.849 [2024-07-24 19:02:25.237958] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:47.849 [2024-07-24 19:02:25.237975] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.849 [2024-07-24 19:02:25.237985] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.849 [2024-07-24 19:02:25.237991] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2229540) 00:19:47.849 [2024-07-24 19:02:25.238002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.849 [2024-07-24 19:02:25.238023] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22893c0, cid 0, qid 0 00:19:47.849 [2024-07-24 19:02:25.238190] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.849 [2024-07-24 19:02:25.238204] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.849 [2024-07-24 19:02:25.238214] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.849 [2024-07-24 19:02:25.238222] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22893c0) on tqpair=0x2229540 00:19:47.849 [2024-07-24 19:02:25.238230] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:19:47.849 [2024-07-24 19:02:25.238239] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:19:47.849 [2024-07-24 19:02:25.238252] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:47.849 [2024-07-24 19:02:25.238362] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:19:47.849 [2024-07-24 19:02:25.238370] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:47.849 [2024-07-24 19:02:25.238384] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.849 [2024-07-24 19:02:25.238392] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.849 [2024-07-24 19:02:25.238398] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2229540) 00:19:47.849 [2024-07-24 19:02:25.238409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.849 [2024-07-24 19:02:25.238431] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22893c0, cid 0, qid 0 00:19:47.849 [2024-07-24 19:02:25.238600] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:19:47.849 [2024-07-24 19:02:25.238615] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.849 [2024-07-24 19:02:25.238622] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.849 [2024-07-24 19:02:25.238628] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22893c0) on tqpair=0x2229540 00:19:47.849 [2024-07-24 19:02:25.238637] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:47.849 [2024-07-24 19:02:25.238653] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.849 [2024-07-24 19:02:25.238663] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.849 [2024-07-24 19:02:25.238669] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2229540) 00:19:47.849 [2024-07-24 19:02:25.238680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.849 [2024-07-24 19:02:25.238701] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22893c0, cid 0, qid 0 00:19:47.849 [2024-07-24 19:02:25.238817] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.849 [2024-07-24 19:02:25.238829] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.849 [2024-07-24 19:02:25.238835] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.849 [2024-07-24 19:02:25.238842] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22893c0) on tqpair=0x2229540 00:19:47.849 [2024-07-24 19:02:25.238849] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:47.849 [2024-07-24 19:02:25.238858] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:19:47.849 [2024-07-24 19:02:25.238871] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:19:47.849 [2024-07-24 19:02:25.238885] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:19:47.849 [2024-07-24 19:02:25.238900] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.849 [2024-07-24 19:02:25.238912] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2229540) 00:19:47.849 [2024-07-24 19:02:25.238924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.849 [2024-07-24 19:02:25.238945] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22893c0, cid 0, qid 0 00:19:47.849 [2024-07-24 19:02:25.239095] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:47.849 [2024-07-24 19:02:25.239120] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:47.849 [2024-07-24 19:02:25.239128] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:47.849 [2024-07-24 19:02:25.239135] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2229540): datao=0, datal=4096, cccid=0 00:19:47.849 [2024-07-24 19:02:25.239142] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22893c0) on tqpair(0x2229540): expected_datao=0, payload_size=4096 00:19:47.849 [2024-07-24 19:02:25.239150] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.849 [2024-07-24 19:02:25.239169] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:47.849 [2024-07-24 19:02:25.239178] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:47.849 [2024-07-24 19:02:25.239293] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.849 [2024-07-24 19:02:25.239304] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.849 [2024-07-24 19:02:25.239311] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.849 [2024-07-24 19:02:25.239318] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22893c0) on tqpair=0x2229540 00:19:47.849 [2024-07-24 19:02:25.239329] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:19:47.849 [2024-07-24 19:02:25.239337] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:19:47.849 [2024-07-24 19:02:25.239345] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:19:47.849 [2024-07-24 19:02:25.239353] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:19:47.849 [2024-07-24 19:02:25.239361] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:19:47.849 [2024-07-24 19:02:25.239370] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:19:47.849 [2024-07-24 19:02:25.239384] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:19:47.849 [2024-07-24 19:02:25.239401] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.849 [2024-07-24 19:02:25.239410] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.849 [2024-07-24 19:02:25.239416] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2229540) 00:19:47.849 [2024-07-24 19:02:25.239427] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:47.849 [2024-07-24 19:02:25.239450] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22893c0, cid 0, qid 0 00:19:47.849 [2024-07-24 19:02:25.239616] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.849 [2024-07-24 19:02:25.239631] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.849 [2024-07-24 19:02:25.239638] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.849 [2024-07-24 19:02:25.239644] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22893c0) on tqpair=0x2229540 00:19:47.849 [2024-07-24 19:02:25.239656] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.849 [2024-07-24 19:02:25.239663] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.849 [2024-07-24 19:02:25.239670] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x2229540) 00:19:47.849 [2024-07-24 19:02:25.239684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:47.849 [2024-07-24 19:02:25.239695] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.849 [2024-07-24 19:02:25.239702] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.849 [2024-07-24 19:02:25.239708] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2229540) 00:19:47.849 [2024-07-24 19:02:25.239717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:47.849 [2024-07-24 19:02:25.239726] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.849 [2024-07-24 19:02:25.239733] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.849 [2024-07-24 19:02:25.239739] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2229540) 00:19:47.849 [2024-07-24 19:02:25.239748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:47.849 [2024-07-24 19:02:25.239758] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.849 [2024-07-24 19:02:25.239764] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.849 [2024-07-24 19:02:25.239771] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2229540) 00:19:47.850 [2024-07-24 19:02:25.239779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:47.850 [2024-07-24 19:02:25.239788] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:19:47.850 [2024-07-24 19:02:25.239808] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:47.850 [2024-07-24 19:02:25.239821] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.850 [2024-07-24 19:02:25.239828] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2229540) 00:19:47.850 [2024-07-24 19:02:25.239854] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.850 [2024-07-24 19:02:25.239876] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22893c0, cid 0, qid 0 00:19:47.850 [2024-07-24 19:02:25.239887] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2289540, cid 1, qid 0 00:19:47.850 [2024-07-24 19:02:25.239895] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22896c0, cid 2, qid 0 00:19:47.850 [2024-07-24 19:02:25.239917] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2289840, cid 3, qid 0 00:19:47.850 [2024-07-24 19:02:25.239925] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22899c0, cid 4, qid 0 00:19:47.850 [2024-07-24 19:02:25.240152] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.850 [2024-07-24 19:02:25.240167] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.850 [2024-07-24 19:02:25.240174] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.850 [2024-07-24 19:02:25.240181] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22899c0) on tqpair=0x2229540 00:19:47.850 [2024-07-24 19:02:25.240190] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:19:47.850 [2024-07-24 19:02:25.240199] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:19:47.850 [2024-07-24 19:02:25.240217] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.850 [2024-07-24 19:02:25.240227] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2229540) 00:19:47.850 [2024-07-24 19:02:25.240238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.850 [2024-07-24 19:02:25.240263] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22899c0, cid 4, qid 0 00:19:47.850 [2024-07-24 19:02:25.240397] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:47.850 [2024-07-24 19:02:25.240412] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:47.850 [2024-07-24 19:02:25.240419] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:47.850 [2024-07-24 19:02:25.240426] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2229540): datao=0, datal=4096, cccid=4 00:19:47.850 [2024-07-24 19:02:25.240433] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22899c0) on tqpair(0x2229540): expected_datao=0, payload_size=4096 00:19:47.850 [2024-07-24 19:02:25.240441] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.850 [2024-07-24 19:02:25.240451] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:47.850 [2024-07-24 19:02:25.240459] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:47.850 [2024-07-24 19:02:25.240479] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.850 [2024-07-24 19:02:25.240490] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.850 [2024-07-24 19:02:25.240496] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.850 [2024-07-24 19:02:25.240503] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22899c0) on tqpair=0x2229540 00:19:47.850 [2024-07-24 19:02:25.240522] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:19:47.850 [2024-07-24 19:02:25.240557] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.850 [2024-07-24 19:02:25.240569] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2229540) 00:19:47.850 [2024-07-24 19:02:25.240580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.850 [2024-07-24 19:02:25.240591] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.850 [2024-07-24 19:02:25.240599] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.850 [2024-07-24 19:02:25.240605] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2229540) 00:19:47.850 [2024-07-24 
19:02:25.240614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:47.850 [2024-07-24 19:02:25.240640] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22899c0, cid 4, qid 0 00:19:47.850 [2024-07-24 19:02:25.240652] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2289b40, cid 5, qid 0 00:19:47.850 [2024-07-24 19:02:25.240817] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:47.850 [2024-07-24 19:02:25.240829] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:47.850 [2024-07-24 19:02:25.240835] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:47.850 [2024-07-24 19:02:25.240842] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2229540): datao=0, datal=1024, cccid=4 00:19:47.850 [2024-07-24 19:02:25.240849] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22899c0) on tqpair(0x2229540): expected_datao=0, payload_size=1024 00:19:47.850 [2024-07-24 19:02:25.240857] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.850 [2024-07-24 19:02:25.240866] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:47.850 [2024-07-24 19:02:25.240874] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:47.850 [2024-07-24 19:02:25.240882] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.850 [2024-07-24 19:02:25.240891] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.850 [2024-07-24 19:02:25.240898] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.850 [2024-07-24 19:02:25.240905] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2289b40) on tqpair=0x2229540 00:19:47.850 [2024-07-24 19:02:25.281252] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.850 [2024-07-24 19:02:25.281272] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.850 [2024-07-24 19:02:25.281284] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.850 [2024-07-24 19:02:25.281292] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22899c0) on tqpair=0x2229540 00:19:47.850 [2024-07-24 19:02:25.281310] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.850 [2024-07-24 19:02:25.281321] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2229540) 00:19:47.850 [2024-07-24 19:02:25.281332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.850 [2024-07-24 19:02:25.281363] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22899c0, cid 4, qid 0 00:19:47.850 [2024-07-24 19:02:25.281498] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:47.850 [2024-07-24 19:02:25.281513] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:47.850 [2024-07-24 19:02:25.281519] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:47.850 [2024-07-24 19:02:25.281526] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2229540): datao=0, datal=3072, cccid=4 00:19:47.850 [2024-07-24 19:02:25.281533] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22899c0) on tqpair(0x2229540): expected_datao=0, payload_size=3072 00:19:47.850 
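The discovery log fetch decodes cleanly from the GET LOG PAGE (02) commands above: cdw10 packs log page ID 0x70 with the 0-based dword count, so cdw10:00ff0070 reads the 1024-byte log header (to learn the record count), cdw10:02ff0070 re-reads the full 3072-byte log (the header plus two 1024-byte discovery records), and the 8-byte read just below (cdw10:00010070) re-checks the generation counter to confirm the log did not change mid-fetch. The same log can be pulled from the initiator side with stock nvme-cli, assuming it is installed:

    # fetch and print the discovery log the identify tool just walked
    nvme discover -t tcp -a 10.0.0.2 -s 4420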
[2024-07-24 19:02:25.281541] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:19:47.850 [2024-07-24 19:02:25.281560] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:19:47.850 [2024-07-24 19:02:25.281569] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:19:47.850 [2024-07-24 19:02:25.281681] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:19:47.850 [2024-07-24 19:02:25.281693] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:19:47.850 [2024-07-24 19:02:25.281700] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:19:47.850 [2024-07-24 19:02:25.281706] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22899c0) on tqpair=0x2229540
00:19:47.850 [2024-07-24 19:02:25.281721] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:19:47.850 [2024-07-24 19:02:25.281730] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2229540)
00:19:47.850 [2024-07-24 19:02:25.281741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:47.850 [2024-07-24 19:02:25.281770] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22899c0, cid 4, qid 0
00:19:47.850 [2024-07-24 19:02:25.281906] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:19:47.850 [2024-07-24 19:02:25.281921] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:19:47.850 [2024-07-24 19:02:25.281928] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:19:47.850 [2024-07-24 19:02:25.281934] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2229540): datao=0, datal=8, cccid=4
00:19:47.850 [2024-07-24 19:02:25.281941] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22899c0) on tqpair(0x2229540): expected_datao=0, payload_size=8
00:19:47.850 [2024-07-24 19:02:25.281949] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:19:47.850 [2024-07-24 19:02:25.281959] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:19:47.850 [2024-07-24 19:02:25.281966] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:19:47.850 [2024-07-24 19:02:25.322247] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:19:47.850 [2024-07-24 19:02:25.322266] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:19:47.850 [2024-07-24 19:02:25.322274] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:19:47.850 [2024-07-24 19:02:25.322281] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22899c0) on tqpair=0x2229540
00:19:47.850 =====================================================
00:19:47.850 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:19:47.850 =====================================================
00:19:47.850 Controller Capabilities/Features
00:19:47.850 ================================
00:19:47.850 Vendor ID: 0000
00:19:47.850 Subsystem Vendor ID: 0000
00:19:47.850 Serial Number: ....................
00:19:47.850 Model Number: ........................................
00:19:47.850 Firmware Version: 24.09
00:19:47.850 Recommended Arb Burst: 0
00:19:47.850 IEEE OUI Identifier: 00 00 00
00:19:47.850 Multi-path I/O
00:19:47.850 May have multiple subsystem ports: No
00:19:47.850 May have multiple controllers: No
00:19:47.850 Associated with SR-IOV VF: No
00:19:47.850 Max Data Transfer Size: 131072
00:19:47.850 Max Number of Namespaces: 0
00:19:47.850 Max Number of I/O Queues: 1024
00:19:47.851 NVMe Specification Version (VS): 1.3
00:19:47.851 NVMe Specification Version (Identify): 1.3
00:19:47.851 Maximum Queue Entries: 128
00:19:47.851 Contiguous Queues Required: Yes
00:19:47.851 Arbitration Mechanisms Supported
00:19:47.851 Weighted Round Robin: Not Supported
00:19:47.851 Vendor Specific: Not Supported
00:19:47.851 Reset Timeout: 15000 ms
00:19:47.851 Doorbell Stride: 4 bytes
00:19:47.851 NVM Subsystem Reset: Not Supported
00:19:47.851 Command Sets Supported
00:19:47.851 NVM Command Set: Supported
00:19:47.851 Boot Partition: Not Supported
00:19:47.851 Memory Page Size Minimum: 4096 bytes
00:19:47.851 Memory Page Size Maximum: 4096 bytes
00:19:47.851 Persistent Memory Region: Not Supported
00:19:47.851 Optional Asynchronous Events Supported
00:19:47.851 Namespace Attribute Notices: Not Supported
00:19:47.851 Firmware Activation Notices: Not Supported
00:19:47.851 ANA Change Notices: Not Supported
00:19:47.851 PLE Aggregate Log Change Notices: Not Supported
00:19:47.851 LBA Status Info Alert Notices: Not Supported
00:19:47.851 EGE Aggregate Log Change Notices: Not Supported
00:19:47.851 Normal NVM Subsystem Shutdown event: Not Supported
00:19:47.851 Zone Descriptor Change Notices: Not Supported
00:19:47.851 Discovery Log Change Notices: Supported
00:19:47.851 Controller Attributes
00:19:47.851 128-bit Host Identifier: Not Supported
00:19:47.851 Non-Operational Permissive Mode: Not Supported
00:19:47.851 NVM Sets: Not Supported
00:19:47.851 Read Recovery Levels: Not Supported
00:19:47.851 Endurance Groups: Not Supported
00:19:47.851 Predictable Latency Mode: Not Supported
00:19:47.851 Traffic Based Keep Alive: Not Supported
00:19:47.851 Namespace Granularity: Not Supported
00:19:47.851 SQ Associations: Not Supported
00:19:47.851 UUID List: Not Supported
00:19:47.851 Multi-Domain Subsystem: Not Supported
00:19:47.851 Fixed Capacity Management: Not Supported
00:19:47.851 Variable Capacity Management: Not Supported
00:19:47.851 Delete Endurance Group: Not Supported
00:19:47.851 Delete NVM Set: Not Supported
00:19:47.851 Extended LBA Formats Supported: Not Supported
00:19:47.851 Flexible Data Placement Supported: Not Supported
00:19:47.851
00:19:47.851 Controller Memory Buffer Support
00:19:47.851 ================================
00:19:47.851 Supported: No
00:19:47.851
00:19:47.851 Persistent Memory Region Support
00:19:47.851 ================================
00:19:47.851 Supported: No
00:19:47.851
00:19:47.851 Admin Command Set Attributes
00:19:47.851 ============================
00:19:47.851 Security Send/Receive: Not Supported
00:19:47.851 Format NVM: Not Supported
00:19:47.851 Firmware Activate/Download: Not Supported
00:19:47.851 Namespace Management: Not Supported
00:19:47.851 Device Self-Test: Not Supported
00:19:47.851 Directives: Not Supported
00:19:47.851 NVMe-MI: Not Supported
00:19:47.851 Virtualization Management: Not Supported
00:19:47.851 Doorbell Buffer Config: Not Supported
00:19:47.851 Get LBA Status Capability: Not Supported
00:19:47.851 Command & Feature Lockdown Capability: Not Supported
00:19:47.851 Abort Command Limit: 1
00:19:47.851 Async Event Request Limit: 4
00:19:47.851 Number of Firmware Slots: N/A
00:19:47.851 Firmware Slot 1 Read-Only: N/A
00:19:47.851 Firmware Activation Without Reset: N/A
00:19:47.851 Multiple Update Detection Support: N/A
00:19:47.851 Firmware Update Granularity: No Information Provided
00:19:47.851 Per-Namespace SMART Log: No
00:19:47.851 Asymmetric Namespace Access Log Page: Not Supported
00:19:47.851 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:19:47.851 Command Effects Log Page: Not Supported
00:19:47.851 Get Log Page Extended Data: Supported
00:19:47.851 Telemetry Log Pages: Not Supported
00:19:47.851 Persistent Event Log Pages: Not Supported
00:19:47.851 Supported Log Pages Log Page: May Support
00:19:47.851 Commands Supported & Effects Log Page: Not Supported
00:19:47.851 Feature Identifiers & Effects Log Page: May Support
00:19:47.851 NVMe-MI Commands & Effects Log Page: May Support
00:19:47.851 Data Area 4 for Telemetry Log: Not Supported
00:19:47.851 Error Log Page Entries Supported: 128
00:19:47.851 Keep Alive: Not Supported
00:19:47.851
00:19:47.851 NVM Command Set Attributes
00:19:47.851 ==========================
00:19:47.851 Submission Queue Entry Size
00:19:47.851 Max: 1
00:19:47.851 Min: 1
00:19:47.851 Completion Queue Entry Size
00:19:47.851 Max: 1
00:19:47.851 Min: 1
00:19:47.851 Number of Namespaces: 0
00:19:47.851 Compare Command: Not Supported
00:19:47.851 Write Uncorrectable Command: Not Supported
00:19:47.851 Dataset Management Command: Not Supported
00:19:47.851 Write Zeroes Command: Not Supported
00:19:47.851 Set Features Save Field: Not Supported
00:19:47.851 Reservations: Not Supported
00:19:47.851 Timestamp: Not Supported
00:19:47.851 Copy: Not Supported
00:19:47.851 Volatile Write Cache: Not Present
00:19:47.851 Atomic Write Unit (Normal): 1
00:19:47.851 Atomic Write Unit (PFail): 1
00:19:47.851 Atomic Compare & Write Unit: 1
00:19:47.851 Fused Compare & Write: Supported
00:19:47.851 Scatter-Gather List
00:19:47.851 SGL Command Set: Supported
00:19:47.851 SGL Keyed: Supported
00:19:47.851 SGL Bit Bucket Descriptor: Not Supported
00:19:47.851 SGL Metadata Pointer: Not Supported
00:19:47.851 Oversized SGL: Not Supported
00:19:47.851 SGL Metadata Address: Not Supported
00:19:47.851 SGL Offset: Supported
00:19:47.851 Transport SGL Data Block: Not Supported
00:19:47.851 Replay Protected Memory Block: Not Supported
00:19:47.851
00:19:47.851 Firmware Slot Information
00:19:47.851 =========================
00:19:47.851 Active slot: 0
00:19:47.851
00:19:47.851
00:19:47.851 Error Log
00:19:47.851 =========
00:19:47.851
00:19:47.851 Active Namespaces
00:19:47.851 =================
00:19:47.851 Discovery Log Page
00:19:47.851 ==================
00:19:47.851 Generation Counter: 2
00:19:47.851 Number of Records: 2
00:19:47.851 Record Format: 0
00:19:47.851
00:19:47.851 Discovery Log Entry 0
00:19:47.851 ----------------------
00:19:47.851 Transport Type: 3 (TCP)
00:19:47.851 Address Family: 1 (IPv4)
00:19:47.851 Subsystem Type: 3 (Current Discovery Subsystem)
00:19:47.851 Entry Flags:
00:19:47.851 Duplicate Returned Information: 1
00:19:47.851 Explicit Persistent Connection Support for Discovery: 1
00:19:47.851 Transport Requirements:
00:19:47.851 Secure Channel: Not Required
00:19:47.851 Port ID: 0 (0x0000)
00:19:47.851 Controller ID: 65535 (0xffff)
00:19:47.851 Admin Max SQ Size: 128
00:19:47.851 Transport Service Identifier: 4420
00:19:47.851 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:19:47.851 Transport Address: 10.0.0.2
00:19:47.851 Discovery Log Entry 1
00:19:47.851 ----------------------
00:19:47.851 Transport Type: 3 (TCP)
00:19:47.851 Address Family: 1 (IPv4)
00:19:47.851 Subsystem Type: 2 (NVM Subsystem)
00:19:47.851 Entry Flags:
00:19:47.851 Duplicate Returned Information: 0
00:19:47.851 Explicit Persistent Connection Support for Discovery: 0
00:19:47.851 Transport Requirements:
00:19:47.851 Secure Channel: Not Required
00:19:47.851 Port ID: 0 (0x0000)
00:19:47.851 Controller ID: 65535 (0xffff)
00:19:47.851 Admin Max SQ Size: 128
00:19:47.851 Transport Service Identifier: 4420
00:19:47.851 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:19:47.851 Transport Address: 10.0.0.2
[2024-07-24 19:02:25.322396] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:19:47.851 [2024-07-24 19:02:25.322418] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22893c0) on tqpair=0x2229540
00:19:47.851 [2024-07-24 19:02:25.322433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:47.851 [2024-07-24 19:02:25.322443] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2289540) on tqpair=0x2229540
00:19:47.851 [2024-07-24 19:02:25.322451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:47.851 [2024-07-24 19:02:25.322459] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22896c0) on tqpair=0x2229540
00:19:47.851 [2024-07-24 19:02:25.322467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:47.851 [2024-07-24 19:02:25.322475] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2289840) on tqpair=0x2229540
00:19:47.851 [2024-07-24 19:02:25.322483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:47.851 [2024-07-24 19:02:25.322501] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:19:47.851 [2024-07-24 19:02:25.322511] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:19:47.852 [2024-07-24 19:02:25.322518] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2229540)
00:19:47.852 [2024-07-24 19:02:25.322529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:47.852 [2024-07-24 19:02:25.322554] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2289840, cid 3, qid 0
00:19:47.852 [2024-07-24 19:02:25.322728] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:19:47.852 [2024-07-24 19:02:25.322741] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:19:47.852 [2024-07-24 19:02:25.322747] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:19:47.852 [2024-07-24 19:02:25.322754] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2289840) on tqpair=0x2229540
00:19:47.852 [2024-07-24 19:02:25.322766] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:19:47.852 [2024-07-24 19:02:25.322774] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:19:47.852 [2024-07-24 19:02:25.322780] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2229540)
00:19:47.852 [2024-07-24
19:02:25.322791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.852 [2024-07-24 19:02:25.322818] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2289840, cid 3, qid 0 00:19:47.852 [2024-07-24 19:02:25.322958] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.852 [2024-07-24 19:02:25.322970] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.852 [2024-07-24 19:02:25.322977] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.852 [2024-07-24 19:02:25.322983] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2289840) on tqpair=0x2229540 00:19:47.852 [2024-07-24 19:02:25.322992] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:19:47.852 [2024-07-24 19:02:25.323000] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:19:47.852 [2024-07-24 19:02:25.323016] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.852 [2024-07-24 19:02:25.323025] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.852 [2024-07-24 19:02:25.323032] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2229540) 00:19:47.852 [2024-07-24 19:02:25.323042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.852 [2024-07-24 19:02:25.323063] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2289840, cid 3, qid 0 00:19:47.852 [2024-07-24 19:02:25.323191] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.852 [2024-07-24 19:02:25.323208] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.852 [2024-07-24 19:02:25.323218] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.852 [2024-07-24 19:02:25.323226] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2289840) on tqpair=0x2229540 00:19:47.852 [2024-07-24 19:02:25.323243] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.852 [2024-07-24 19:02:25.323254] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.852 [2024-07-24 19:02:25.323260] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2229540) 00:19:47.852 [2024-07-24 19:02:25.323271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.852 [2024-07-24 19:02:25.323292] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2289840, cid 3, qid 0 00:19:47.852 [2024-07-24 19:02:25.323433] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.852 [2024-07-24 19:02:25.323448] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.852 [2024-07-24 19:02:25.323455] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.852 [2024-07-24 19:02:25.323462] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2289840) on tqpair=0x2229540 00:19:47.852 [2024-07-24 19:02:25.323478] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.852 [2024-07-24 19:02:25.323488] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.852 [2024-07-24 19:02:25.323495] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2229540) 00:19:47.852 [2024-07-24 19:02:25.323505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.852 [2024-07-24 19:02:25.323526] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2289840, cid 3, qid 0 00:19:47.852 [2024-07-24 19:02:25.323643] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.852 [2024-07-24 19:02:25.323655] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.852 [2024-07-24 19:02:25.323661] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.852 [2024-07-24 19:02:25.323668] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2289840) on tqpair=0x2229540 00:19:47.852 [2024-07-24 19:02:25.323684] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.852 [2024-07-24 19:02:25.323694] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.852 [2024-07-24 19:02:25.323700] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2229540) 00:19:47.852 [2024-07-24 19:02:25.323711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.852 [2024-07-24 19:02:25.323731] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2289840, cid 3, qid 0 00:19:47.852 [2024-07-24 19:02:25.323846] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.852 [2024-07-24 19:02:25.323858] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.852 [2024-07-24 19:02:25.323864] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.852 [2024-07-24 19:02:25.323871] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2289840) on tqpair=0x2229540 00:19:47.852 [2024-07-24 19:02:25.323886] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.852 [2024-07-24 19:02:25.323896] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.852 [2024-07-24 19:02:25.323903] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2229540) 00:19:47.852 [2024-07-24 19:02:25.323913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.852 [2024-07-24 19:02:25.323934] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2289840, cid 3, qid 0 00:19:47.852 [2024-07-24 19:02:25.324051] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.852 [2024-07-24 19:02:25.324067] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.852 [2024-07-24 19:02:25.324073] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.852 [2024-07-24 19:02:25.324084] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2289840) on tqpair=0x2229540 00:19:47.852 [2024-07-24 19:02:25.324110] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.852 [2024-07-24 19:02:25.324121] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.852 [2024-07-24 19:02:25.324128] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2229540) 00:19:47.852 [2024-07-24 19:02:25.324138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.852 [2024-07-24 19:02:25.324160] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2289840, cid 3, qid 0 00:19:47.852 [2024-07-24 19:02:25.324277] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.852 [2024-07-24 19:02:25.324289] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.852 [2024-07-24 19:02:25.324296] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.852 [2024-07-24 19:02:25.324302] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2289840) on tqpair=0x2229540 00:19:47.852 [2024-07-24 19:02:25.324318] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.852 [2024-07-24 19:02:25.324328] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.852 [2024-07-24 19:02:25.324335] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2229540) 00:19:47.852 [2024-07-24 19:02:25.324345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.852 [2024-07-24 19:02:25.324366] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2289840, cid 3, qid 0 00:19:47.852 [2024-07-24 19:02:25.324485] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.852 [2024-07-24 19:02:25.324498] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.852 [2024-07-24 19:02:25.324504] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.852 [2024-07-24 19:02:25.324511] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2289840) on tqpair=0x2229540 00:19:47.852 [2024-07-24 19:02:25.324527] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.852 [2024-07-24 19:02:25.324537] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.852 [2024-07-24 19:02:25.324544] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2229540) 00:19:47.852 [2024-07-24 19:02:25.324554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.852 [2024-07-24 19:02:25.324574] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2289840, cid 3, qid 0 00:19:47.852 [2024-07-24 19:02:25.324743] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.852 [2024-07-24 19:02:25.324758] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.852 [2024-07-24 19:02:25.324765] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.852 [2024-07-24 19:02:25.324771] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2289840) on tqpair=0x2229540 00:19:47.852 [2024-07-24 19:02:25.324788] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.852 [2024-07-24 19:02:25.324798] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.852 [2024-07-24 19:02:25.324804] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2229540) 00:19:47.852 [2024-07-24 19:02:25.324815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.852 [2024-07-24 19:02:25.324836] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2289840, cid 3, qid 0 00:19:47.852 
[2024-07-24 19:02:25.324949] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.852 [2024-07-24 19:02:25.324964] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.852 [2024-07-24 19:02:25.324971] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.852 [2024-07-24 19:02:25.324977] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2289840) on tqpair=0x2229540 00:19:47.852 [2024-07-24 19:02:25.324999] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.852 [2024-07-24 19:02:25.325010] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.852 [2024-07-24 19:02:25.325017] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2229540) 00:19:47.852 [2024-07-24 19:02:25.325027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.852 [2024-07-24 19:02:25.325048] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2289840, cid 3, qid 0 00:19:47.852 [2024-07-24 19:02:25.325216] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.853 [2024-07-24 19:02:25.325232] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.853 [2024-07-24 19:02:25.325238] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.853 [2024-07-24 19:02:25.325245] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2289840) on tqpair=0x2229540 00:19:47.853 [2024-07-24 19:02:25.325262] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.853 [2024-07-24 19:02:25.325272] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.853 [2024-07-24 19:02:25.325278] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2229540) 00:19:47.853 [2024-07-24 19:02:25.325289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.853 [2024-07-24 19:02:25.325310] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2289840, cid 3, qid 0 00:19:47.853 [2024-07-24 19:02:25.325427] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.853 [2024-07-24 19:02:25.325442] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.853 [2024-07-24 19:02:25.325448] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.853 [2024-07-24 19:02:25.325455] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2289840) on tqpair=0x2229540 00:19:47.853 [2024-07-24 19:02:25.325471] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.853 [2024-07-24 19:02:25.325481] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.853 [2024-07-24 19:02:25.325488] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2229540) 00:19:47.853 [2024-07-24 19:02:25.325498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.854 [2024-07-24 19:02:25.325519] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2289840, cid 3, qid 0 00:19:47.854 [2024-07-24 19:02:25.325634] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.854 [2024-07-24 19:02:25.325646] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
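
(Editor's note, not part of the captured output.) The GET LOG PAGE commands traced above -- cdw10:00ff0070, cdw10:02ff0070 and cdw10:00010070, all against log page 0x70 -- are the usual discovery-log read sequence: fetch the first 1024 bytes to learn the generation counter and record count, fetch the header plus both 1024-byte records in one 3072-byte read, then re-read the 8-byte generation counter to confirm the log did not change mid-read. The sizes follow from the Get Log Page cdw10 layout in the NVMe base specification (LID in bits 7:0, 0's-based dword count NUMDL in bits 31:16) and match the c2h_data datal values in the trace (1024, 3072, 8). A minimal, illustrative decode in C:

    #include <stdint.h>
    #include <stdio.h>

    /* Get Log Page cdw10: LID in bits 7:0, NUMDL (number of dwords to
     * transfer, 0's based, lower 16 bits of the count) in bits 31:16. */
    static void decode_get_log_page_cdw10(uint32_t cdw10)
    {
        unsigned lid   = cdw10 & 0xff;
        unsigned numdl = (cdw10 >> 16) & 0xffff;
        printf("cdw10=0x%08x -> LID 0x%02x, %u bytes\n",
               cdw10, lid, (numdl + 1) * 4);
    }

    int main(void)
    {
        decode_get_log_page_cdw10(0x00ff0070); /* 1024 bytes: header + first records */
        decode_get_log_page_cdw10(0x02ff0070); /* 3072 bytes: header + two 1024-byte entries */
        decode_get_log_page_cdw10(0x00010070); /*    8 bytes: generation counter re-check */
        return 0;
    }

The repeated FABRIC PROPERTY GET capsules that follow are the host polling CSTS while shutting the discovery controller down, which the trace below reports as "shutdown complete in 7 milliseconds".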
00:19:47.854 [2024-07-24 19:02:25.325653] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.854 [2024-07-24 19:02:25.325660] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2289840) on tqpair=0x2229540 00:19:47.854 [2024-07-24 19:02:25.325675] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.854 [2024-07-24 19:02:25.325685] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.854 [2024-07-24 19:02:25.325692] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2229540) 00:19:47.854 [2024-07-24 19:02:25.325702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.854 [2024-07-24 19:02:25.325723] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2289840, cid 3, qid 0 00:19:47.854 [2024-07-24 19:02:25.325838] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.854 [2024-07-24 19:02:25.325853] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.854 [2024-07-24 19:02:25.325860] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.854 [2024-07-24 19:02:25.325866] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2289840) on tqpair=0x2229540 00:19:47.854 [2024-07-24 19:02:25.325883] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.854 [2024-07-24 19:02:25.325897] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.854 [2024-07-24 19:02:25.325904] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2229540) 00:19:47.854 [2024-07-24 19:02:25.325915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.854 [2024-07-24 19:02:25.325936] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2289840, cid 3, qid 0 00:19:47.854 [2024-07-24 19:02:25.326056] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.854 [2024-07-24 19:02:25.326072] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.854 [2024-07-24 19:02:25.326078] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.854 [2024-07-24 19:02:25.326085] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2289840) on tqpair=0x2229540 00:19:47.854 [2024-07-24 19:02:25.330107] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.854 [2024-07-24 19:02:25.330122] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.854 [2024-07-24 19:02:25.330128] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2229540) 00:19:47.854 [2024-07-24 19:02:25.330139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.854 [2024-07-24 19:02:25.330162] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2289840, cid 3, qid 0 00:19:47.854 [2024-07-24 19:02:25.330320] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.854 [2024-07-24 19:02:25.330335] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.854 [2024-07-24 19:02:25.330342] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.854 [2024-07-24 19:02:25.330348] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x2289840) on tqpair=0x2229540 00:19:47.854 [2024-07-24 19:02:25.330362] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:19:47.854 00:19:47.854 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:19:47.854 [2024-07-24 19:02:25.366056] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:19:47.854 [2024-07-24 19:02:25.366117] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3200204 ] 00:19:47.854 EAL: No free 2048 kB hugepages reported on node 1 00:19:47.854 [2024-07-24 19:02:25.400915] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:19:47.854 [2024-07-24 19:02:25.400964] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:47.854 [2024-07-24 19:02:25.400973] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:47.854 [2024-07-24 19:02:25.400988] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:47.854 [2024-07-24 19:02:25.400999] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:47.854 [2024-07-24 19:02:25.401234] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:19:47.854 [2024-07-24 19:02:25.401272] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x125d540 0 00:19:47.854 [2024-07-24 19:02:25.408117] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:47.854 [2024-07-24 19:02:25.408140] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:47.854 [2024-07-24 19:02:25.408152] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:47.854 [2024-07-24 19:02:25.408158] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:47.854 [2024-07-24 19:02:25.408197] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.854 [2024-07-24 19:02:25.408209] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.854 [2024-07-24 19:02:25.408215] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x125d540) 00:19:47.854 [2024-07-24 19:02:25.408229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:47.854 [2024-07-24 19:02:25.408255] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12bd3c0, cid 0, qid 0 00:19:47.854 [2024-07-24 19:02:25.416118] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.854 [2024-07-24 19:02:25.416135] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.854 [2024-07-24 19:02:25.416142] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.854 [2024-07-24 19:02:25.416149] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12bd3c0) on tqpair=0x125d540 00:19:47.854 [2024-07-24 19:02:25.416162] nvme_fabric.c: 
622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:47.854 [2024-07-24 19:02:25.416172] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:19:47.854 [2024-07-24 19:02:25.416181] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:19:47.854 [2024-07-24 19:02:25.416199] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.854 [2024-07-24 19:02:25.416208] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.854 [2024-07-24 19:02:25.416215] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x125d540) 00:19:47.854 [2024-07-24 19:02:25.416225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.854 [2024-07-24 19:02:25.416248] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12bd3c0, cid 0, qid 0 00:19:47.854 [2024-07-24 19:02:25.416433] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.854 [2024-07-24 19:02:25.416448] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.854 [2024-07-24 19:02:25.416454] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.854 [2024-07-24 19:02:25.416461] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12bd3c0) on tqpair=0x125d540 00:19:47.854 [2024-07-24 19:02:25.416473] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:19:47.854 [2024-07-24 19:02:25.416487] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:19:47.854 [2024-07-24 19:02:25.416499] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.854 [2024-07-24 19:02:25.416506] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.854 [2024-07-24 19:02:25.416512] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x125d540) 00:19:47.854 [2024-07-24 19:02:25.416523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.854 [2024-07-24 19:02:25.416544] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12bd3c0, cid 0, qid 0 00:19:47.854 [2024-07-24 19:02:25.416687] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.854 [2024-07-24 19:02:25.416698] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.854 [2024-07-24 19:02:25.416704] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.854 [2024-07-24 19:02:25.416711] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12bd3c0) on tqpair=0x125d540 00:19:47.854 [2024-07-24 19:02:25.416719] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:19:47.854 [2024-07-24 19:02:25.416732] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:19:47.854 [2024-07-24 19:02:25.416748] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.854 [2024-07-24 19:02:25.416756] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.854 [2024-07-24 19:02:25.416762] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x125d540) 00:19:47.854 [2024-07-24 19:02:25.416772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.854 [2024-07-24 19:02:25.416793] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12bd3c0, cid 0, qid 0 00:19:47.854 [2024-07-24 19:02:25.416927] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.854 [2024-07-24 19:02:25.416938] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.854 [2024-07-24 19:02:25.416944] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.854 [2024-07-24 19:02:25.416951] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12bd3c0) on tqpair=0x125d540 00:19:47.855 [2024-07-24 19:02:25.416958] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:47.855 [2024-07-24 19:02:25.416974] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.855 [2024-07-24 19:02:25.416982] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.855 [2024-07-24 19:02:25.416989] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x125d540) 00:19:47.855 [2024-07-24 19:02:25.416999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.855 [2024-07-24 19:02:25.417018] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12bd3c0, cid 0, qid 0 00:19:47.855 [2024-07-24 19:02:25.417161] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.855 [2024-07-24 19:02:25.417177] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.855 [2024-07-24 19:02:25.417183] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.855 [2024-07-24 19:02:25.417190] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12bd3c0) on tqpair=0x125d540 00:19:47.855 [2024-07-24 19:02:25.417198] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:19:47.855 [2024-07-24 19:02:25.417206] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:19:47.855 [2024-07-24 19:02:25.417219] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:47.855 [2024-07-24 19:02:25.417329] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:19:47.855 [2024-07-24 19:02:25.417336] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:47.855 [2024-07-24 19:02:25.417348] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.855 [2024-07-24 19:02:25.417356] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.855 [2024-07-24 19:02:25.417362] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x125d540) 00:19:47.855 [2024-07-24 19:02:25.417372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:47.855 [2024-07-24 19:02:25.417407] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12bd3c0, cid 0, qid 0 00:19:47.855 [2024-07-24 19:02:25.417590] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.855 [2024-07-24 19:02:25.417605] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.855 [2024-07-24 19:02:25.417611] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.855 [2024-07-24 19:02:25.417618] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12bd3c0) on tqpair=0x125d540 00:19:47.855 [2024-07-24 19:02:25.417629] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:47.855 [2024-07-24 19:02:25.417646] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.855 [2024-07-24 19:02:25.417656] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.855 [2024-07-24 19:02:25.417662] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x125d540) 00:19:47.855 [2024-07-24 19:02:25.417672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.855 [2024-07-24 19:02:25.417693] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12bd3c0, cid 0, qid 0 00:19:47.855 [2024-07-24 19:02:25.417812] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.855 [2024-07-24 19:02:25.417826] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.855 [2024-07-24 19:02:25.417832] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.855 [2024-07-24 19:02:25.417839] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12bd3c0) on tqpair=0x125d540 00:19:47.855 [2024-07-24 19:02:25.417846] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:47.855 [2024-07-24 19:02:25.417854] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:19:47.855 [2024-07-24 19:02:25.417867] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:19:47.855 [2024-07-24 19:02:25.417880] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:19:47.855 [2024-07-24 19:02:25.417893] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.855 [2024-07-24 19:02:25.417901] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x125d540) 00:19:47.855 [2024-07-24 19:02:25.417911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.855 [2024-07-24 19:02:25.417932] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12bd3c0, cid 0, qid 0 00:19:47.855 [2024-07-24 19:02:25.418108] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:47.855 [2024-07-24 19:02:25.418125] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:47.855 [2024-07-24 19:02:25.418132] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:47.855 [2024-07-24 
19:02:25.418138] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x125d540): datao=0, datal=4096, cccid=0 00:19:47.855 [2024-07-24 19:02:25.418146] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12bd3c0) on tqpair(0x125d540): expected_datao=0, payload_size=4096 00:19:47.855 [2024-07-24 19:02:25.418153] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.855 [2024-07-24 19:02:25.418164] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:47.855 [2024-07-24 19:02:25.418171] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:47.855 [2024-07-24 19:02:25.418198] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.855 [2024-07-24 19:02:25.418209] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.855 [2024-07-24 19:02:25.418216] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.855 [2024-07-24 19:02:25.418222] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12bd3c0) on tqpair=0x125d540 00:19:47.855 [2024-07-24 19:02:25.418233] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:19:47.855 [2024-07-24 19:02:25.418241] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:19:47.855 [2024-07-24 19:02:25.418249] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:19:47.855 [2024-07-24 19:02:25.418255] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:19:47.855 [2024-07-24 19:02:25.418267] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:19:47.855 [2024-07-24 19:02:25.418275] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:19:47.855 [2024-07-24 19:02:25.418289] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:19:47.855 [2024-07-24 19:02:25.418305] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.855 [2024-07-24 19:02:25.418313] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.855 [2024-07-24 19:02:25.418319] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x125d540) 00:19:47.855 [2024-07-24 19:02:25.418330] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:47.855 [2024-07-24 19:02:25.418352] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12bd3c0, cid 0, qid 0 00:19:47.855 [2024-07-24 19:02:25.418518] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.855 [2024-07-24 19:02:25.418530] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.855 [2024-07-24 19:02:25.418536] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.855 [2024-07-24 19:02:25.418543] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12bd3c0) on tqpair=0x125d540 00:19:47.855 [2024-07-24 19:02:25.418553] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.855 [2024-07-24 19:02:25.418560] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.855 [2024-07-24 
19:02:25.418566] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x125d540) 00:19:47.855 [2024-07-24 19:02:25.418575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:47.855 [2024-07-24 19:02:25.418585] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.855 [2024-07-24 19:02:25.418592] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.855 [2024-07-24 19:02:25.418598] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x125d540) 00:19:47.855 [2024-07-24 19:02:25.418606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:47.855 [2024-07-24 19:02:25.418616] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.855 [2024-07-24 19:02:25.418622] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.855 [2024-07-24 19:02:25.418628] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x125d540) 00:19:47.855 [2024-07-24 19:02:25.418636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:47.855 [2024-07-24 19:02:25.418660] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.855 [2024-07-24 19:02:25.418666] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.855 [2024-07-24 19:02:25.418672] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x125d540) 00:19:47.855 [2024-07-24 19:02:25.418680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:47.855 [2024-07-24 19:02:25.418688] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:47.855 [2024-07-24 19:02:25.418706] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:47.855 [2024-07-24 19:02:25.418718] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.855 [2024-07-24 19:02:25.418724] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x125d540) 00:19:47.855 [2024-07-24 19:02:25.418734] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.855 [2024-07-24 19:02:25.418759] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12bd3c0, cid 0, qid 0 00:19:47.855 [2024-07-24 19:02:25.418784] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12bd540, cid 1, qid 0 00:19:47.855 [2024-07-24 19:02:25.418792] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12bd6c0, cid 2, qid 0 00:19:47.855 [2024-07-24 19:02:25.418799] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12bd840, cid 3, qid 0 00:19:47.855 [2024-07-24 19:02:25.418806] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12bd9c0, cid 4, qid 0 00:19:47.855 [2024-07-24 19:02:25.418973] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.856 [2024-07-24 19:02:25.418988] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
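
(Editor's note, not part of the captured output.) The nvme_ctrlr.c "setting state to ..." lines above trace the fabrics controller-initialization state machine against nqn.2016-06.io.spdk:cnode1: connect the admin queue, read VS and CAP, write CC.EN = 1, poll CSTS.RDY, IDENTIFY the controller, configure AER, set the keep-alive timeout and number of queues. On the host side this whole sequence is driven by a single spdk_nvme_connect() call. A minimal sketch, assuming the SPDK development headers and the target from this run listening on 10.0.0.2:4420 (the trid string is copied from the spdk_nvme_identify invocation above; the app name is hypothetical):

    #include <stdio.h>

    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "connect_sketch"; /* hypothetical app name */
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
                "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Runs the init sequence traced in this log: read vs/cap,
         * CC.EN = 1, wait for CSTS.RDY = 1, identify, AER, keep alive,
         * number of queues. NULL opts = library defaults. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("CNTLID 0x%04x, MDTS %u\n",
               (unsigned)cdata->cntlid, (unsigned)cdata->mdts);

        spdk_nvme_detach(ctrlr);
        return 0;
    }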
00:19:47.856 [2024-07-24 19:02:25.418994] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.856 [2024-07-24 19:02:25.419000] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12bd9c0) on tqpair=0x125d540 00:19:47.856 [2024-07-24 19:02:25.419008] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:19:47.856 [2024-07-24 19:02:25.419016] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:47.856 [2024-07-24 19:02:25.419034] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:19:47.856 [2024-07-24 19:02:25.419061] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:19:47.856 [2024-07-24 19:02:25.419072] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.856 [2024-07-24 19:02:25.419080] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.856 [2024-07-24 19:02:25.419086] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x125d540) 00:19:47.856 [2024-07-24 19:02:25.419097] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:47.856 [2024-07-24 19:02:25.419126] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12bd9c0, cid 4, qid 0 00:19:47.856 [2024-07-24 19:02:25.419281] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.856 [2024-07-24 19:02:25.419296] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.856 [2024-07-24 19:02:25.419303] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.856 [2024-07-24 19:02:25.419309] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12bd9c0) on tqpair=0x125d540 00:19:47.856 [2024-07-24 19:02:25.419378] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:19:47.856 [2024-07-24 19:02:25.419397] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:47.856 [2024-07-24 19:02:25.419426] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.856 [2024-07-24 19:02:25.419434] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x125d540) 00:19:47.856 [2024-07-24 19:02:25.419444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.856 [2024-07-24 19:02:25.419479] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12bd9c0, cid 4, qid 0 00:19:47.856 [2024-07-24 19:02:25.419707] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:47.856 [2024-07-24 19:02:25.419719] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:47.856 [2024-07-24 19:02:25.419725] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:47.856 [2024-07-24 19:02:25.419731] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x125d540): datao=0, datal=4096, cccid=4 00:19:47.856 [2024-07-24 
19:02:25.419743] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12bd9c0) on tqpair(0x125d540): expected_datao=0, payload_size=4096 00:19:47.856 [2024-07-24 19:02:25.419750] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.856 [2024-07-24 19:02:25.419771] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:47.856 [2024-07-24 19:02:25.419780] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:47.856 [2024-07-24 19:02:25.419873] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.856 [2024-07-24 19:02:25.419884] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.856 [2024-07-24 19:02:25.419890] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.856 [2024-07-24 19:02:25.419896] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12bd9c0) on tqpair=0x125d540 00:19:47.856 [2024-07-24 19:02:25.419911] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:19:47.856 [2024-07-24 19:02:25.419931] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:19:47.856 [2024-07-24 19:02:25.419947] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:19:47.856 [2024-07-24 19:02:25.419960] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.856 [2024-07-24 19:02:25.419967] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x125d540) 00:19:47.856 [2024-07-24 19:02:25.419978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.856 [2024-07-24 19:02:25.419998] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12bd9c0, cid 4, qid 0 00:19:47.856 [2024-07-24 19:02:25.424116] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:47.856 [2024-07-24 19:02:25.424133] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:47.856 [2024-07-24 19:02:25.424140] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:47.856 [2024-07-24 19:02:25.424146] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x125d540): datao=0, datal=4096, cccid=4 00:19:47.856 [2024-07-24 19:02:25.424153] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12bd9c0) on tqpair(0x125d540): expected_datao=0, payload_size=4096 00:19:47.856 [2024-07-24 19:02:25.424161] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.856 [2024-07-24 19:02:25.424171] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:47.856 [2024-07-24 19:02:25.424178] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:47.856 [2024-07-24 19:02:25.424187] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.856 [2024-07-24 19:02:25.424195] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.856 [2024-07-24 19:02:25.424202] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.856 [2024-07-24 19:02:25.424208] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12bd9c0) on tqpair=0x125d540 00:19:47.856 [2024-07-24 19:02:25.424231] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting 
state to identify namespace id descriptors (timeout 30000 ms) 00:19:47.856 [2024-07-24 19:02:25.424250] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:47.856 [2024-07-24 19:02:25.424265] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.856 [2024-07-24 19:02:25.424273] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x125d540) 00:19:47.856 [2024-07-24 19:02:25.424284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.856 [2024-07-24 19:02:25.424307] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12bd9c0, cid 4, qid 0 00:19:47.856 [2024-07-24 19:02:25.424492] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:47.856 [2024-07-24 19:02:25.424512] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:47.856 [2024-07-24 19:02:25.424519] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:47.856 [2024-07-24 19:02:25.424525] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x125d540): datao=0, datal=4096, cccid=4 00:19:47.856 [2024-07-24 19:02:25.424532] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12bd9c0) on tqpair(0x125d540): expected_datao=0, payload_size=4096 00:19:47.856 [2024-07-24 19:02:25.424540] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.856 [2024-07-24 19:02:25.424549] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:47.856 [2024-07-24 19:02:25.424556] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:47.856 [2024-07-24 19:02:25.424585] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.856 [2024-07-24 19:02:25.424595] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.856 [2024-07-24 19:02:25.424601] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.856 [2024-07-24 19:02:25.424608] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12bd9c0) on tqpair=0x125d540 00:19:47.856 [2024-07-24 19:02:25.424620] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:47.856 [2024-07-24 19:02:25.424634] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:19:47.856 [2024-07-24 19:02:25.424648] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:19:47.856 [2024-07-24 19:02:25.424660] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:19:47.856 [2024-07-24 19:02:25.424669] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:47.856 [2024-07-24 19:02:25.424677] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:19:47.856 [2024-07-24 19:02:25.424685] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:19:47.856 
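The DEBUG lines above trace the SPDK NVMe host state machine walking the admin-queue bring-up: each capsule_cmd send is answered by a capsule response PDU (type 5) or a C2H data PDU (type 7), and each completion advances _nvme_ctrlr_set_state to the next step (keep alive, number of queues, identify active ns, identify ns, id descriptors, supported log pages/features). A minimal sketch of reproducing this trace against the same target follows; the rpc.py subcommands are standard SPDK RPCs and the NQN/serial/listener values are the ones reported later in this log, but the identify example path and the -L nvme log flag are assumptions (the flag only emits these messages in a debug build):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc_py nvmf_create_transport -t tcp
  $rpc_py bdev_malloc_create 64 512    # backing bdev "Malloc0", reported below as "SPDK bdev Controller"
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # the identify example app drives exactly this init sequence and prints the
  # "NVMe over Fabrics controller at 10.0.0.2:4420" report that follows in this log
  ./build/examples/identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L nvme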
[2024-07-24 19:02:25.424692] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:19:47.856 [2024-07-24 19:02:25.424700] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:19:47.856 [2024-07-24 19:02:25.424718] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.856 [2024-07-24 19:02:25.424726] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x125d540) 00:19:47.856 [2024-07-24 19:02:25.424737] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.856 [2024-07-24 19:02:25.424747] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.856 [2024-07-24 19:02:25.424754] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.856 [2024-07-24 19:02:25.424760] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x125d540) 00:19:47.856 [2024-07-24 19:02:25.424783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:47.856 [2024-07-24 19:02:25.424808] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12bd9c0, cid 4, qid 0 00:19:47.856 [2024-07-24 19:02:25.424818] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12bdb40, cid 5, qid 0 00:19:47.856 [2024-07-24 19:02:25.425006] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.856 [2024-07-24 19:02:25.425021] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.856 [2024-07-24 19:02:25.425027] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.856 [2024-07-24 19:02:25.425034] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12bd9c0) on tqpair=0x125d540 00:19:47.856 [2024-07-24 19:02:25.425047] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.856 [2024-07-24 19:02:25.425057] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.856 [2024-07-24 19:02:25.425063] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.856 [2024-07-24 19:02:25.425070] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12bdb40) on tqpair=0x125d540 00:19:47.856 [2024-07-24 19:02:25.425100] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.856 [2024-07-24 19:02:25.425118] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x125d540) 00:19:47.857 [2024-07-24 19:02:25.425129] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.857 [2024-07-24 19:02:25.425151] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12bdb40, cid 5, qid 0 00:19:47.857 [2024-07-24 19:02:25.425315] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.857 [2024-07-24 19:02:25.425327] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.857 [2024-07-24 19:02:25.425333] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.857 [2024-07-24 19:02:25.425340] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12bdb40) on tqpair=0x125d540 00:19:47.857 [2024-07-24 19:02:25.425355] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.857 [2024-07-24 19:02:25.425364] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x125d540) 00:19:47.857 [2024-07-24 19:02:25.425375] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.857 [2024-07-24 19:02:25.425396] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12bdb40, cid 5, qid 0 00:19:47.857 [2024-07-24 19:02:25.425535] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.857 [2024-07-24 19:02:25.425550] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.857 [2024-07-24 19:02:25.425556] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.857 [2024-07-24 19:02:25.425563] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12bdb40) on tqpair=0x125d540 00:19:47.857 [2024-07-24 19:02:25.425578] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.857 [2024-07-24 19:02:25.425587] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x125d540) 00:19:47.857 [2024-07-24 19:02:25.425597] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.857 [2024-07-24 19:02:25.425617] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12bdb40, cid 5, qid 0 00:19:47.857 [2024-07-24 19:02:25.425751] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.857 [2024-07-24 19:02:25.425763] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.857 [2024-07-24 19:02:25.425769] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.857 [2024-07-24 19:02:25.425775] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12bdb40) on tqpair=0x125d540 00:19:47.857 [2024-07-24 19:02:25.425799] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.857 [2024-07-24 19:02:25.425809] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x125d540) 00:19:47.857 [2024-07-24 19:02:25.425820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.857 [2024-07-24 19:02:25.425831] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.857 [2024-07-24 19:02:25.425839] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x125d540) 00:19:47.857 [2024-07-24 19:02:25.425848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.857 [2024-07-24 19:02:25.425863] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.857 [2024-07-24 19:02:25.425870] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x125d540) 00:19:47.857 [2024-07-24 19:02:25.425880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.857 [2024-07-24 19:02:25.425906] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.857 [2024-07-24 19:02:25.425914] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x125d540) 00:19:47.857 [2024-07-24 19:02:25.425923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.857 [2024-07-24 19:02:25.425944] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12bdb40, cid 5, qid 0 00:19:47.857 [2024-07-24 19:02:25.425954] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12bd9c0, cid 4, qid 0 00:19:47.857 [2024-07-24 19:02:25.425977] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12bdcc0, cid 6, qid 0 00:19:47.857 [2024-07-24 19:02:25.425984] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12bde40, cid 7, qid 0 00:19:47.857 [2024-07-24 19:02:25.426306] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:47.857 [2024-07-24 19:02:25.426323] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:47.857 [2024-07-24 19:02:25.426329] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:47.857 [2024-07-24 19:02:25.426336] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x125d540): datao=0, datal=8192, cccid=5 00:19:47.857 [2024-07-24 19:02:25.426343] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12bdb40) on tqpair(0x125d540): expected_datao=0, payload_size=8192 00:19:47.857 [2024-07-24 19:02:25.426351] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.857 [2024-07-24 19:02:25.426361] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:47.857 [2024-07-24 19:02:25.426369] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:47.857 [2024-07-24 19:02:25.426378] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:47.857 [2024-07-24 19:02:25.426386] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:47.857 [2024-07-24 19:02:25.426393] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:47.857 [2024-07-24 19:02:25.426399] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x125d540): datao=0, datal=512, cccid=4 00:19:47.857 [2024-07-24 19:02:25.426406] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12bd9c0) on tqpair(0x125d540): expected_datao=0, payload_size=512 00:19:47.857 [2024-07-24 19:02:25.426414] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.857 [2024-07-24 19:02:25.426423] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:47.857 [2024-07-24 19:02:25.426430] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:47.857 [2024-07-24 19:02:25.426454] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:47.857 [2024-07-24 19:02:25.426462] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:47.857 [2024-07-24 19:02:25.426468] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:47.857 [2024-07-24 19:02:25.426474] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x125d540): datao=0, datal=512, cccid=6 00:19:47.857 [2024-07-24 19:02:25.426482] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12bdcc0) on tqpair(0x125d540): expected_datao=0, payload_size=512 00:19:47.857 [2024-07-24 19:02:25.426489] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.857 [2024-07-24 
19:02:25.426497] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:47.857 [2024-07-24 19:02:25.426504] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:47.857 [2024-07-24 19:02:25.426513] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:47.857 [2024-07-24 19:02:25.426521] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:47.857 [2024-07-24 19:02:25.426533] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:47.857 [2024-07-24 19:02:25.426539] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x125d540): datao=0, datal=4096, cccid=7 00:19:47.857 [2024-07-24 19:02:25.426547] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12bde40) on tqpair(0x125d540): expected_datao=0, payload_size=4096 00:19:47.857 [2024-07-24 19:02:25.426554] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.857 [2024-07-24 19:02:25.426563] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:47.857 [2024-07-24 19:02:25.426570] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:47.857 [2024-07-24 19:02:25.426581] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.857 [2024-07-24 19:02:25.426605] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.857 [2024-07-24 19:02:25.426611] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.857 [2024-07-24 19:02:25.426618] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12bdb40) on tqpair=0x125d540 00:19:47.857 [2024-07-24 19:02:25.426635] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.857 [2024-07-24 19:02:25.426645] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.857 [2024-07-24 19:02:25.426651] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.857 [2024-07-24 19:02:25.426657] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12bd9c0) on tqpair=0x125d540 00:19:47.857 [2024-07-24 19:02:25.426671] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.857 [2024-07-24 19:02:25.426681] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.857 [2024-07-24 19:02:25.426687] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.857 [2024-07-24 19:02:25.426693] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12bdcc0) on tqpair=0x125d540 00:19:47.857 [2024-07-24 19:02:25.426703] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.857 [2024-07-24 19:02:25.426711] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.857 [2024-07-24 19:02:25.426717] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.857 [2024-07-24 19:02:25.426724] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12bde40) on tqpair=0x125d540 00:19:47.857 ===================================================== 00:19:47.857 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:47.857 ===================================================== 00:19:47.857 Controller Capabilities/Features 00:19:47.857 ================================ 00:19:47.857 Vendor ID: 8086 00:19:47.857 Subsystem Vendor ID: 8086 00:19:47.857 Serial Number: SPDK00000000000001 00:19:47.857 Model Number: SPDK bdev Controller 00:19:47.857 Firmware Version: 24.09 00:19:47.857 Recommended Arb Burst: 6 00:19:47.857 
IEEE OUI Identifier: e4 d2 5c 00:19:47.857 Multi-path I/O 00:19:47.857 May have multiple subsystem ports: Yes 00:19:47.857 May have multiple controllers: Yes 00:19:47.857 Associated with SR-IOV VF: No 00:19:47.857 Max Data Transfer Size: 131072 00:19:47.857 Max Number of Namespaces: 32 00:19:47.857 Max Number of I/O Queues: 127 00:19:47.857 NVMe Specification Version (VS): 1.3 00:19:47.857 NVMe Specification Version (Identify): 1.3 00:19:47.857 Maximum Queue Entries: 128 00:19:47.857 Contiguous Queues Required: Yes 00:19:47.857 Arbitration Mechanisms Supported 00:19:47.857 Weighted Round Robin: Not Supported 00:19:47.857 Vendor Specific: Not Supported 00:19:47.857 Reset Timeout: 15000 ms 00:19:47.857 Doorbell Stride: 4 bytes 00:19:47.857 NVM Subsystem Reset: Not Supported 00:19:47.857 Command Sets Supported 00:19:47.857 NVM Command Set: Supported 00:19:47.857 Boot Partition: Not Supported 00:19:47.858 Memory Page Size Minimum: 4096 bytes 00:19:47.858 Memory Page Size Maximum: 4096 bytes 00:19:47.858 Persistent Memory Region: Not Supported 00:19:47.858 Optional Asynchronous Events Supported 00:19:47.858 Namespace Attribute Notices: Supported 00:19:47.858 Firmware Activation Notices: Not Supported 00:19:47.858 ANA Change Notices: Not Supported 00:19:47.858 PLE Aggregate Log Change Notices: Not Supported 00:19:47.858 LBA Status Info Alert Notices: Not Supported 00:19:47.858 EGE Aggregate Log Change Notices: Not Supported 00:19:47.858 Normal NVM Subsystem Shutdown event: Not Supported 00:19:47.858 Zone Descriptor Change Notices: Not Supported 00:19:47.858 Discovery Log Change Notices: Not Supported 00:19:47.858 Controller Attributes 00:19:47.858 128-bit Host Identifier: Supported 00:19:47.858 Non-Operational Permissive Mode: Not Supported 00:19:47.858 NVM Sets: Not Supported 00:19:47.858 Read Recovery Levels: Not Supported 00:19:47.858 Endurance Groups: Not Supported 00:19:47.858 Predictable Latency Mode: Not Supported 00:19:47.858 Traffic Based Keep ALive: Not Supported 00:19:47.858 Namespace Granularity: Not Supported 00:19:47.858 SQ Associations: Not Supported 00:19:47.858 UUID List: Not Supported 00:19:47.858 Multi-Domain Subsystem: Not Supported 00:19:47.858 Fixed Capacity Management: Not Supported 00:19:47.858 Variable Capacity Management: Not Supported 00:19:47.858 Delete Endurance Group: Not Supported 00:19:47.858 Delete NVM Set: Not Supported 00:19:47.858 Extended LBA Formats Supported: Not Supported 00:19:47.858 Flexible Data Placement Supported: Not Supported 00:19:47.858 00:19:47.858 Controller Memory Buffer Support 00:19:47.858 ================================ 00:19:47.858 Supported: No 00:19:47.858 00:19:47.858 Persistent Memory Region Support 00:19:47.858 ================================ 00:19:47.858 Supported: No 00:19:47.858 00:19:47.858 Admin Command Set Attributes 00:19:47.858 ============================ 00:19:47.858 Security Send/Receive: Not Supported 00:19:47.858 Format NVM: Not Supported 00:19:47.858 Firmware Activate/Download: Not Supported 00:19:47.858 Namespace Management: Not Supported 00:19:47.858 Device Self-Test: Not Supported 00:19:47.858 Directives: Not Supported 00:19:47.858 NVMe-MI: Not Supported 00:19:47.858 Virtualization Management: Not Supported 00:19:47.858 Doorbell Buffer Config: Not Supported 00:19:47.858 Get LBA Status Capability: Not Supported 00:19:47.858 Command & Feature Lockdown Capability: Not Supported 00:19:47.858 Abort Command Limit: 4 00:19:47.858 Async Event Request Limit: 4 00:19:47.858 Number of Firmware Slots: N/A 00:19:47.858 Firmware 
Slot 1 Read-Only: N/A 00:19:47.858 Firmware Activation Without Reset: N/A 00:19:47.858 Multiple Update Detection Support: N/A 00:19:47.858 Firmware Update Granularity: No Information Provided 00:19:47.858 Per-Namespace SMART Log: No 00:19:47.858 Asymmetric Namespace Access Log Page: Not Supported 00:19:47.858 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:19:47.858 Command Effects Log Page: Supported 00:19:47.858 Get Log Page Extended Data: Supported 00:19:47.858 Telemetry Log Pages: Not Supported 00:19:47.858 Persistent Event Log Pages: Not Supported 00:19:47.858 Supported Log Pages Log Page: May Support 00:19:47.858 Commands Supported & Effects Log Page: Not Supported 00:19:47.858 Feature Identifiers & Effects Log Page:May Support 00:19:47.858 NVMe-MI Commands & Effects Log Page: May Support 00:19:47.858 Data Area 4 for Telemetry Log: Not Supported 00:19:47.858 Error Log Page Entries Supported: 128 00:19:47.858 Keep Alive: Supported 00:19:47.858 Keep Alive Granularity: 10000 ms 00:19:47.858 00:19:47.858 NVM Command Set Attributes 00:19:47.858 ========================== 00:19:47.858 Submission Queue Entry Size 00:19:47.858 Max: 64 00:19:47.858 Min: 64 00:19:47.858 Completion Queue Entry Size 00:19:47.858 Max: 16 00:19:47.858 Min: 16 00:19:47.858 Number of Namespaces: 32 00:19:47.858 Compare Command: Supported 00:19:47.858 Write Uncorrectable Command: Not Supported 00:19:47.858 Dataset Management Command: Supported 00:19:47.858 Write Zeroes Command: Supported 00:19:47.858 Set Features Save Field: Not Supported 00:19:47.858 Reservations: Supported 00:19:47.858 Timestamp: Not Supported 00:19:47.858 Copy: Supported 00:19:47.858 Volatile Write Cache: Present 00:19:47.858 Atomic Write Unit (Normal): 1 00:19:47.858 Atomic Write Unit (PFail): 1 00:19:47.858 Atomic Compare & Write Unit: 1 00:19:47.858 Fused Compare & Write: Supported 00:19:47.858 Scatter-Gather List 00:19:47.858 SGL Command Set: Supported 00:19:47.858 SGL Keyed: Supported 00:19:47.858 SGL Bit Bucket Descriptor: Not Supported 00:19:47.858 SGL Metadata Pointer: Not Supported 00:19:47.858 Oversized SGL: Not Supported 00:19:47.858 SGL Metadata Address: Not Supported 00:19:47.858 SGL Offset: Supported 00:19:47.858 Transport SGL Data Block: Not Supported 00:19:47.858 Replay Protected Memory Block: Not Supported 00:19:47.858 00:19:47.858 Firmware Slot Information 00:19:47.858 ========================= 00:19:47.858 Active slot: 1 00:19:47.858 Slot 1 Firmware Revision: 24.09 00:19:47.858 00:19:47.858 00:19:47.858 Commands Supported and Effects 00:19:47.858 ============================== 00:19:47.858 Admin Commands 00:19:47.858 -------------- 00:19:47.858 Get Log Page (02h): Supported 00:19:47.858 Identify (06h): Supported 00:19:47.858 Abort (08h): Supported 00:19:47.858 Set Features (09h): Supported 00:19:47.858 Get Features (0Ah): Supported 00:19:47.858 Asynchronous Event Request (0Ch): Supported 00:19:47.858 Keep Alive (18h): Supported 00:19:47.858 I/O Commands 00:19:47.858 ------------ 00:19:47.858 Flush (00h): Supported LBA-Change 00:19:47.858 Write (01h): Supported LBA-Change 00:19:47.858 Read (02h): Supported 00:19:47.858 Compare (05h): Supported 00:19:47.858 Write Zeroes (08h): Supported LBA-Change 00:19:47.858 Dataset Management (09h): Supported LBA-Change 00:19:47.858 Copy (19h): Supported LBA-Change 00:19:47.858 00:19:47.858 Error Log 00:19:47.858 ========= 00:19:47.858 00:19:47.858 Arbitration 00:19:47.858 =========== 00:19:47.858 Arbitration Burst: 1 00:19:47.858 00:19:47.858 Power Management 00:19:47.858 ================ 
00:19:47.858 Number of Power States: 1 00:19:47.858 Current Power State: Power State #0 00:19:47.858 Power State #0: 00:19:47.858 Max Power: 0.00 W 00:19:47.858 Non-Operational State: Operational 00:19:47.858 Entry Latency: Not Reported 00:19:47.858 Exit Latency: Not Reported 00:19:47.858 Relative Read Throughput: 0 00:19:47.858 Relative Read Latency: 0 00:19:47.858 Relative Write Throughput: 0 00:19:47.858 Relative Write Latency: 0 00:19:47.858 Idle Power: Not Reported 00:19:47.858 Active Power: Not Reported 00:19:47.858 Non-Operational Permissive Mode: Not Supported 00:19:47.858 00:19:47.858 Health Information 00:19:47.858 ================== 00:19:47.858 Critical Warnings: 00:19:47.858 Available Spare Space: OK 00:19:47.858 Temperature: OK 00:19:47.858 Device Reliability: OK 00:19:47.858 Read Only: No 00:19:47.858 Volatile Memory Backup: OK 00:19:47.858 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:47.858 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:47.858 Available Spare: 0% 00:19:47.858 Available Spare Threshold: 0% 00:19:47.858 Life Percentage Used: 0% [2024-07-24 19:02:25.426830] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.859 [2024-07-24 19:02:25.426841] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x125d540) 00:19:47.859 [2024-07-24 19:02:25.426852] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.859 [2024-07-24 19:02:25.426873] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12bde40, cid 7, qid 0 00:19:47.859 [2024-07-24 19:02:25.427045] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.859 [2024-07-24 19:02:25.427060] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.859 [2024-07-24 19:02:25.427066] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.859 [2024-07-24 19:02:25.427073] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12bde40) on tqpair=0x125d540 00:19:47.859 [2024-07-24 19:02:25.427136] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:19:47.859 [2024-07-24 19:02:25.427157] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12bd3c0) on tqpair=0x125d540 00:19:47.859 [2024-07-24 19:02:25.427168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.859 [2024-07-24 19:02:25.427177] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12bd540) on tqpair=0x125d540 00:19:47.859 [2024-07-24 19:02:25.427184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.859 [2024-07-24 19:02:25.427208] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12bd6c0) on tqpair=0x125d540 00:19:47.859 [2024-07-24 19:02:25.427215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.859 [2024-07-24 19:02:25.427227] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12bd840) on tqpair=0x125d540 00:19:47.859 [2024-07-24 19:02:25.427234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.859 [2024-07-24 19:02:25.427247] nvme_tcp.c:
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.859 [2024-07-24 19:02:25.427255] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.859 [2024-07-24 19:02:25.427261] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x125d540) 00:19:47.859 [2024-07-24 19:02:25.427271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.859 [2024-07-24 19:02:25.427293] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12bd840, cid 3, qid 0 00:19:47.859 [2024-07-24 19:02:25.427469] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.859 [2024-07-24 19:02:25.427484] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.859 [2024-07-24 19:02:25.427491] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.859 [2024-07-24 19:02:25.427497] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12bd840) on tqpair=0x125d540 00:19:47.859 [2024-07-24 19:02:25.427508] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.859 [2024-07-24 19:02:25.427516] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.859 [2024-07-24 19:02:25.427522] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x125d540) 00:19:47.859 [2024-07-24 19:02:25.427532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.859 [2024-07-24 19:02:25.427557] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12bd840, cid 3, qid 0 00:19:47.859 [2024-07-24 19:02:25.427689] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.859 [2024-07-24 19:02:25.427703] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.859 [2024-07-24 19:02:25.427710] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.859 [2024-07-24 19:02:25.427716] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12bd840) on tqpair=0x125d540 00:19:47.859 [2024-07-24 19:02:25.427724] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:19:47.859 [2024-07-24 19:02:25.427731] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:19:47.859 [2024-07-24 19:02:25.427746] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.859 [2024-07-24 19:02:25.427755] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.859 [2024-07-24 19:02:25.427761] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x125d540) 00:19:47.859 [2024-07-24 19:02:25.427771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.859 [2024-07-24 19:02:25.427791] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12bd840, cid 3, qid 0 00:19:47.859 [2024-07-24 19:02:25.427913] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.859 [2024-07-24 19:02:25.427927] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.859 [2024-07-24 19:02:25.427934] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.859 [2024-07-24 19:02:25.427940] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x12bd840) on tqpair=0x125d540 00:19:47.859 [2024-07-24 19:02:25.427956] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.859 [2024-07-24 19:02:25.427965] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.859 [2024-07-24 19:02:25.427972] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x125d540) 00:19:47.859 [2024-07-24 19:02:25.427982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.859 [2024-07-24 19:02:25.428005] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12bd840, cid 3, qid 0 00:19:47.859 [2024-07-24 19:02:25.432113] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.859 [2024-07-24 19:02:25.432130] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.859 [2024-07-24 19:02:25.432137] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.859 [2024-07-24 19:02:25.432144] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12bd840) on tqpair=0x125d540 00:19:47.859 [2024-07-24 19:02:25.432161] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.859 [2024-07-24 19:02:25.432171] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.859 [2024-07-24 19:02:25.432177] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x125d540) 00:19:47.859 [2024-07-24 19:02:25.432188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.859 [2024-07-24 19:02:25.432210] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12bd840, cid 3, qid 0 00:19:47.859 [2024-07-24 19:02:25.432401] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.859 [2024-07-24 19:02:25.432413] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.859 [2024-07-24 19:02:25.432419] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.859 [2024-07-24 19:02:25.432426] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12bd840) on tqpair=0x125d540 00:19:47.859 [2024-07-24 19:02:25.432439] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:19:48.117 Data Units Read: 0 00:19:48.117 Data Units Written: 0 00:19:48.117 Host Read Commands: 0 00:19:48.117 Host Write Commands: 0 00:19:48.117 Controller Busy Time: 0 minutes 00:19:48.117 Power Cycles: 0 00:19:48.117 Power On Hours: 0 hours 00:19:48.117 Unsafe Shutdowns: 0 00:19:48.117 Unrecoverable Media Errors: 0 00:19:48.117 Lifetime Error Log Entries: 0 00:19:48.117 Warning Temperature Time: 0 minutes 00:19:48.117 Critical Temperature Time: 0 minutes 00:19:48.117 00:19:48.117 Number of Queues 00:19:48.117 ================ 00:19:48.117 Number of I/O Submission Queues: 127 00:19:48.117 Number of I/O Completion Queues: 127 00:19:48.117 00:19:48.117 Active Namespaces 00:19:48.117 ================= 00:19:48.117 Namespace ID:1 00:19:48.117 Error Recovery Timeout: Unlimited 00:19:48.117 Command Set Identifier: NVM (00h) 00:19:48.117 Deallocate: Supported 00:19:48.117 Deallocated/Unwritten Error: Not Supported 00:19:48.117 Deallocated Read Value: Unknown 00:19:48.117 Deallocate in Write Zeroes: Not Supported 00:19:48.117 Deallocated Guard Field: 0xFFFF 00:19:48.117 Flush: Supported 
00:19:48.117 Reservation: Supported 00:19:48.117 Namespace Sharing Capabilities: Multiple Controllers 00:19:48.117 Size (in LBAs): 131072 (0GiB) 00:19:48.117 Capacity (in LBAs): 131072 (0GiB) 00:19:48.117 Utilization (in LBAs): 131072 (0GiB) 00:19:48.117 NGUID: ABCDEF0123456789ABCDEF0123456789 00:19:48.117 EUI64: ABCDEF0123456789 00:19:48.117 UUID: e7f4d4ad-6660-45f6-8570-c7a1bc397215 00:19:48.117 Thin Provisioning: Not Supported 00:19:48.117 Per-NS Atomic Units: Yes 00:19:48.117 Atomic Boundary Size (Normal): 0 00:19:48.117 Atomic Boundary Size (PFail): 0 00:19:48.117 Atomic Boundary Offset: 0 00:19:48.117 Maximum Single Source Range Length: 65535 00:19:48.117 Maximum Copy Length: 65535 00:19:48.117 Maximum Source Range Count: 1 00:19:48.117 NGUID/EUI64 Never Reused: No 00:19:48.117 Namespace Write Protected: No 00:19:48.117 Number of LBA Formats: 1 00:19:48.117 Current LBA Format: LBA Format #00 00:19:48.117 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:48.117 00:19:48.117 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:19:48.117 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:48.117 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.117 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:48.117 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.117 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:19:48.117 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:19:48.117 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:48.117 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:19:48.117 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:48.117 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:19:48.117 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:48.117 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:48.117 rmmod nvme_tcp 00:19:48.117 rmmod nvme_fabrics 00:19:48.117 rmmod nvme_keyring 00:19:48.117 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:48.117 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:19:48.117 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:19:48.117 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3200052 ']' 00:19:48.117 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3200052 00:19:48.117 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 3200052 ']' 00:19:48.117 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 3200052 00:19:48.117 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:19:48.117 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:48.117 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3200052 00:19:48.117 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:19:48.117 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:48.117 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3200052' 00:19:48.117 killing process with pid 3200052 00:19:48.117 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 3200052 00:19:48.117 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 3200052 00:19:48.375 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:48.375 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:48.375 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:48.375 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:48.375 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:48.375 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.375 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:48.375 19:02:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.905 19:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:50.905 00:19:50.905 real 0m5.336s 00:19:50.905 user 0m4.305s 00:19:50.905 sys 0m1.816s 00:19:50.905 19:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:50.905 19:02:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:50.905 ************************************ 00:19:50.905 END TEST nvmf_identify 00:19:50.905 ************************************ 00:19:50.905 19:02:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:50.905 19:02:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:50.905 19:02:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:50.905 19:02:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.905 ************************************ 00:19:50.905 START TEST nvmf_perf 00:19:50.905 ************************************ 00:19:50.905 19:02:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:50.905 * Looking for test storage... 
00:19:50.905 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
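nvmftestinit registers its cleanup handler (trap nvmftestfini) before touching the machine, so a failed run still unloads the nvme modules and removes the network namespace; the PCI scan traced next is what turns SPDK_TEST_NVMF_NICS=e810 into concrete net devices. A simplified sketch of that discovery loop, assuming the same sysfs layout the trace below walks (/sys/bus/pci/devices/$pci/net/*) and E810 IDs 8086:159b; the helper internals in nvmf/common.sh differ in detail:

  # collect the E810 ports and map each PCI function to the kernel net
  # device parked on top of it
  for pci in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do
      for path in /sys/bus/pci/devices/$pci/net/*; do
          [[ -e $path ]] && echo "Found net devices under $pci: ${path##*/}"
      done
  done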
00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:19:50.905 19:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:52.806 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:52.806 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:19:52.806 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:52.806 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:52.806 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:52.806 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:52.806 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:52.806 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:19:52.806 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:52.806 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:19:52.806 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:19:52.806 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:19:52.806 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:19:52.806 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:19:52.806 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:19:52.806 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:52.806 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:52.806 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:52.806 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:52.807 
19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:52.807 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:52.807 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 
00:19:52.807 Found net devices under 0000:09:00.0: cvl_0_0 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:52.807 Found net devices under 0000:09:00.1: cvl_0_1 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:52.807 19:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:52.807 19:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:52.807 19:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:52.807 19:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:52.807 19:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:19:52.807 19:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:19:52.807 19:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:19:52.807 19:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:19:52.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:52.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms
00:19:52.807
00:19:52.807 --- 10.0.0.2 ping statistics ---
00:19:52.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:52.807 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms
00:19:52.807 19:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:52.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:52.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms
00:19:52.807
00:19:52.807 --- 10.0.0.1 ping statistics ---
00:19:52.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:52.807 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms
00:19:52.807 19:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:52.807 19:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0
00:19:52.807 19:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:19:52.807 19:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:52.807 19:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:19:52.807 19:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:19:52.807 19:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:52.807 19:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:19:52.807 19:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:19:52.807 19:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:19:52.807 19:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:19:52.807 19:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable
00:19:52.807 19:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:19:52.807 19:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3202136
00:19:52.807 19:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:19:52.807 19:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3202136
00:19:52.807 19:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 3202136 ']'
00:19:52.807 19:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:52.807 19:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:52.807 19:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
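The run above is nvmf_tcp_init from nvmf/common.sh: one e810 port (cvl_0_0) moves into a dedicated network namespace and becomes the target side, its peer (cvl_0_1) stays in the default namespace as the initiator, and connectivity is verified in both directions before nvmf_tgt starts inside the namespace. A condensed sketch of those steps, using the interface names and addresses from this run (repository paths shortened):

    # Target namespace; the initiator stays in the default namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP port
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
    # Start the target application inside the namespace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

waitforlisten then polls the /var/tmp/spdk.sock RPC socket (up to 100 retries here) before any configuration is sent.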
00:19:52.807 19:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:52.807 19:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:52.807 [2024-07-24 19:02:30.199283] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:19:52.808 [2024-07-24 19:02:30.199362] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.808 EAL: No free 2048 kB hugepages reported on node 1 00:19:52.808 [2024-07-24 19:02:30.267031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:52.808 [2024-07-24 19:02:30.386461] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.808 [2024-07-24 19:02:30.386521] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.808 [2024-07-24 19:02:30.386537] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.808 [2024-07-24 19:02:30.386551] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.808 [2024-07-24 19:02:30.386563] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:52.808 [2024-07-24 19:02:30.386621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.808 [2024-07-24 19:02:30.386688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:52.808 [2024-07-24 19:02:30.386711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:52.808 [2024-07-24 19:02:30.386714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.738 19:02:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:53.738 19:02:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:19:53.738 19:02:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:53.738 19:02:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:53.738 19:02:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:53.738 19:02:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.738 19:02:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:19:53.738 19:02:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:19:57.014 19:02:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:19:57.014 19:02:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:19:57.014 19:02:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:0b:00.0 00:19:57.014 19:02:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:57.272 19:02:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:19:57.272 19:02:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 
-- # '[' -n 0000:0b:00.0 ']' 00:19:57.272 19:02:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:19:57.272 19:02:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:19:57.272 19:02:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:57.528 [2024-07-24 19:02:35.120445] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:57.786 19:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:58.043 19:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:58.043 19:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:58.300 19:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:58.300 19:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:19:58.300 19:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:58.558 [2024-07-24 19:02:36.103999] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.558 19:02:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:58.814 19:02:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:0b:00.0 ']' 00:19:58.814 19:02:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:19:58.814 19:02:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:19:58.815 19:02:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:20:00.184 Initializing NVMe Controllers 00:20:00.185 Attached to NVMe Controller at 0000:0b:00.0 [8086:0a54] 00:20:00.185 Associating PCIE (0000:0b:00.0) NSID 1 with lcore 0 00:20:00.185 Initialization complete. Launching workers. 
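All provisioning in host/perf.sh goes over that RPC socket: gen_nvme.sh discovers the local NVMe controller (Nvme0 at 0000:0b:00.0), a 64 MB malloc bdev with 512-byte blocks is created, and both are exported as namespaces of nqn.2016-06.io.spdk:cnode1 behind a TCP listener on 10.0.0.2:4420. A condensed sketch of that RPC sequence (rpc.py defaults to /var/tmp/spdk.sock; repository paths shortened):

    scripts/rpc.py bdev_malloc_create 64 512                  # -> Malloc0
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                              # -a: allow any host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # NSID 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1   # NSID 2
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

With a RAM-backed and a flash-backed namespace behind one controller, the per-NSID split in the latency tables below is expected. The spdk_nvme_perf run launched just above ('trtype:PCIe traddr:0000:0b:00.0') is a local baseline against the drive itself; its table follows.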
00:20:00.185 ========================================================
00:20:00.185 Latency(us)
00:20:00.185 Device Information : IOPS MiB/s Average min max
00:20:00.185 PCIE (0000:0b:00.0) NSID 1 from core 0: 84092.15 328.48 380.10 32.65 4384.91
00:20:00.185 ========================================================
00:20:00.185 Total : 84092.15 328.48 380.10 32.65 4384.91
00:20:00.185
00:20:00.185 19:02:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:20:00.185 EAL: No free 2048 kB hugepages reported on node 1
00:20:01.554 Initializing NVMe Controllers
00:20:01.554 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:01.554 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:01.554 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:01.554 Initialization complete. Launching workers.
00:20:01.554 ========================================================
00:20:01.554 Latency(us)
00:20:01.554 Device Information : IOPS MiB/s Average min max
00:20:01.554 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 107.00 0.42 9445.12 177.72 45730.91
00:20:01.554 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 55.00 0.21 18898.43 7219.57 50870.47
00:20:01.554 ========================================================
00:20:01.554 Total : 162.00 0.63 12654.58 177.72 50870.47
00:20:01.554
00:20:01.554 19:02:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:20:01.554 EAL: No free 2048 kB hugepages reported on node 1
00:20:02.925 Initializing NVMe Controllers
00:20:02.925 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:02.925 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:02.925 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:02.925 Initialization complete. Launching workers.
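The remaining perf steps reuse the same binary against the TCP listener, varying queue depth and IO size: -q is the queue depth, -o the IO size in bytes, -w the workload, -M the read percentage of the mix, and -t the run time in seconds. The -HI pair on the 4 KiB qd=32 step launched just above appears to enable TCP header and data digests (an assumption from the tool's option set; confirm with spdk_nvme_perf --help on your build); its latency table follows. Representative invocations from this sweep, paths shortened:

    PERF=./build/bin/spdk_nvme_perf
    TRID='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    $PERF -q 1   -o 4096   -w randrw -M 50 -t 1 -r "$TRID"           # low-qd latency
    $PERF -q 32  -o 4096   -w randrw -M 50 -t 1 -HI -r "$TRID"       # digest overhead
    $PERF -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r "$TRID"  # large IO, qd 128

The later -o 36964 step is deliberately misaligned: 36964 is not a multiple of the 512-byte sector size, so perf drops both namespaces with a warning and reports no valid controllers, exercising the error path rather than measuring throughput.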
00:20:02.925 ========================================================
00:20:02.925 Latency(us)
00:20:02.925 Device Information : IOPS MiB/s Average min max
00:20:02.925 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8206.93 32.06 3900.25 425.69 7810.41
00:20:02.925 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3871.40 15.12 8293.04 5119.24 15948.35
00:20:02.925 ========================================================
00:20:02.925 Total : 12078.34 47.18 5308.24 425.69 15948.35
00:20:02.925
00:20:02.925 19:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:20:02.925 19:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:20:02.925 19:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:20:02.925 EAL: No free 2048 kB hugepages reported on node 1
00:20:05.451 Initializing NVMe Controllers
00:20:05.451 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:05.451 Controller IO queue size 128, less than required.
00:20:05.451 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:05.451 Controller IO queue size 128, less than required.
00:20:05.451 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:05.451 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:05.451 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:05.451 Initialization complete. Launching workers.
00:20:05.451 ========================================================
00:20:05.451 Latency(us)
00:20:05.451 Device Information : IOPS MiB/s Average min max
00:20:05.451 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1176.93 294.23 111368.97 70450.65 157013.89
00:20:05.451 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 590.46 147.62 222586.56 90431.39 327455.83
00:20:05.451 ========================================================
00:20:05.451 Total : 1767.39 441.85 148525.33 70450.65 327455.83
00:20:05.451
00:20:05.451 19:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:20:05.451 EAL: No free 2048 kB hugepages reported on node 1
00:20:05.451 No valid NVMe controllers or AIO or URING devices found
00:20:05.451 Initializing NVMe Controllers
00:20:05.451 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:05.451 Controller IO queue size 128, less than required.
00:20:05.451 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:05.451 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:20:05.451 Controller IO queue size 128, less than required.
00:20:05.451 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:05.451 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:20:05.451 WARNING: Some requested NVMe devices were skipped
00:20:05.451 19:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:20:05.451 EAL: No free 2048 kB hugepages reported on node 1
00:20:07.974 Initializing NVMe Controllers
00:20:07.974 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:07.974 Controller IO queue size 128, less than required.
00:20:07.974 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:07.974 Controller IO queue size 128, less than required.
00:20:07.974 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:07.974 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:07.974 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:07.974 Initialization complete. Launching workers.
00:20:07.974
00:20:07.974 ====================
00:20:07.974 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:20:07.974 TCP transport:
00:20:07.974 polls: 16110
00:20:07.974 idle_polls: 6236
00:20:07.974 sock_completions: 9874
00:20:07.974 nvme_completions: 4659
00:20:07.974 submitted_requests: 6946
00:20:07.974 queued_requests: 1
00:20:07.974
00:20:07.974 ====================
00:20:07.974 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:20:07.974 TCP transport:
00:20:07.974 polls: 18095
00:20:07.974 idle_polls: 6966
00:20:07.974 sock_completions: 11129
00:20:07.974 nvme_completions: 3753
00:20:07.974 submitted_requests: 5596
00:20:07.974 queued_requests: 1
00:20:07.974 ========================================================
00:20:07.974 Latency(us)
00:20:07.974 Device Information : IOPS MiB/s Average min max
00:20:07.974 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1163.72 290.93 112252.43 56999.62 182415.80
00:20:07.974 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 937.37 234.34 140481.98 52802.24 185499.87
00:20:07.974 ========================================================
00:20:07.974 Total : 2101.10 525.27 124846.63 52802.24 185499.87
00:20:07.974
00:20:07.974 19:02:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:20:08.232 19:02:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:08.232 19:02:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:20:08.232 19:02:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:20:08.232 19:02:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:20:08.232 19:02:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup
00:20:08.232 19:02:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync
00:20:08.232 19:02:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:08.232 19:02:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e
00:20:08.232 19:02:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:08.232 19:02:45 nvmf_tcp.nvmf_host.nvmf_perf --
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:08.232 rmmod nvme_tcp 00:20:08.232 rmmod nvme_fabrics 00:20:08.232 rmmod nvme_keyring 00:20:08.232 19:02:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:08.232 19:02:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:20:08.232 19:02:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:20:08.232 19:02:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3202136 ']' 00:20:08.232 19:02:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3202136 00:20:08.232 19:02:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 3202136 ']' 00:20:08.232 19:02:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 3202136 00:20:08.232 19:02:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:20:08.232 19:02:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:08.232 19:02:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3202136 00:20:08.490 19:02:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:08.490 19:02:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:08.490 19:02:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3202136' 00:20:08.490 killing process with pid 3202136 00:20:08.490 19:02:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 3202136 00:20:08.490 19:02:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 3202136 00:20:09.911 19:02:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:09.911 19:02:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:09.911 19:02:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:09.911 19:02:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:09.911 19:02:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:09.911 19:02:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.911 19:02:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:09.911 19:02:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.444 19:02:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:12.444 00:20:12.444 real 0m21.523s 00:20:12.444 user 1m7.276s 00:20:12.444 sys 0m5.010s 00:20:12.444 19:02:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:12.444 19:02:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:12.444 ************************************ 00:20:12.444 END TEST nvmf_perf 00:20:12.444 ************************************ 00:20:12.444 19:02:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:12.444 19:02:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:12.444 19:02:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:12.444 19:02:49 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:12.444 ************************************ 00:20:12.444 START TEST nvmf_fio_host 00:20:12.444 ************************************ 00:20:12.444 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:12.444 * Looking for test storage... 00:20:12.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:12.444 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:12.444 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.444 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.444 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:20:12.445 19:02:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.345 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:14.345 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:14.346 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:14.346 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:14.346 19:02:51 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:14.346 Found net devices under 0000:09:00.0: cvl_0_0 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:14.346 Found net devices under 0000:09:00.1: cvl_0_1 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:14.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:14.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:20:14.346 00:20:14.346 --- 10.0.0.2 ping statistics --- 00:20:14.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.346 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:14.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:14.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:20:14.346 00:20:14.346 --- 10.0.0.1 ping statistics --- 00:20:14.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.346 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:14.346 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:20:14.347 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:14.347 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:14.347 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:14.347 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:14.347 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:14.347 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:14.347 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:14.347 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:20:14.347 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:14.347 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:14.347 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.347 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3206103 00:20:14.347 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:14.347 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:14.347 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3206103 00:20:14.347 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 3206103 ']' 00:20:14.347 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.347 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:14.347 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.347 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:14.347 19:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.347 [2024-07-24 19:02:51.807434] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
00:20:14.347 [2024-07-24 19:02:51.807529] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.347 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.347 [2024-07-24 19:02:51.876696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:14.605 [2024-07-24 19:02:51.995403] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.605 [2024-07-24 19:02:51.995477] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.605 [2024-07-24 19:02:51.995493] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.605 [2024-07-24 19:02:51.995507] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:14.605 [2024-07-24 19:02:51.995527] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:14.605 [2024-07-24 19:02:51.995608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.605 [2024-07-24 19:02:51.995677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.605 [2024-07-24 19:02:51.995771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:14.605 [2024-07-24 19:02:51.995773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.169 19:02:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:15.169 19:02:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:20:15.169 19:02:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:15.434 [2024-07-24 19:02:52.964353] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.434 19:02:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:15.434 19:02:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:15.434 19:02:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.434 19:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:15.696 Malloc1 00:20:15.697 19:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:15.954 19:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:16.211 19:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:16.468 [2024-07-24 19:02:53.996288] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:16.468 19:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:16.726 
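The fio stage that follows drives the stock fio binary through SPDK's userspace NVMe ioengine: the fio_nvme helper LD_PRELOADs build/fio/spdk_nvme and passes the connection parameters as the fio filename, which is why the whole --filename argument is quoted. A minimal sketch of the pattern; the job-file body is illustrative, not the repository's actual example_config.fio:

    # Invocation pattern used by fio_nvme (paths shortened):
    LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio job.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

    # job.fio, a guessed minimal job for the spdk ioengine:
    #   [global]
    #   ioengine=spdk      ; resolved from the LD_PRELOADed plugin
    #   thread=1           ; the SPDK plugin requires thread mode
    #   rw=randrw
    #   iodepth=128
    #   [test]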
19:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:20:16.726 19:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:16.726 19:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:16.726 19:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:16.726 19:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:16.726 19:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:16.727 19:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:16.727 19:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:20:16.727 19:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:16.727 19:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:16.727 19:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:16.727 19:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:20:16.727 19:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:16.727 19:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:16.727 19:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:16.727 19:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:16.727 19:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:16.727 19:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:16.727 19:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:16.727 19:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:16.727 19:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:16.727 19:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:16.727 19:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:16.984 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:16.984 fio-3.35 00:20:16.984 Starting 
1 thread
00:20:16.984 EAL: No free 2048 kB hugepages reported on node 1
00:20:19.506
00:20:19.506 test: (groupid=0, jobs=1): err= 0: pid=3206586: Wed Jul 24 19:02:56 2024
00:20:19.506 read: IOPS=8845, BW=34.6MiB/s (36.2MB/s)(69.3MiB/2006msec)
00:20:19.506 slat (nsec): min=1896, max=113024, avg=2401.73, stdev=1487.13
00:20:19.506 clat (usec): min=3027, max=13911, avg=7997.31, stdev=616.07
00:20:19.506 lat (usec): min=3049, max=13913, avg=7999.71, stdev=615.98
00:20:19.506 clat percentiles (usec):
00:20:19.506 | 1.00th=[ 6587], 5.00th=[ 7046], 10.00th=[ 7242], 20.00th=[ 7504],
00:20:19.506 | 30.00th=[ 7701], 40.00th=[ 7832], 50.00th=[ 7963], 60.00th=[ 8160],
00:20:19.506 | 70.00th=[ 8291], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8979],
00:20:19.506 | 99.00th=[ 9372], 99.50th=[ 9503], 99.90th=[12256], 99.95th=[13173],
00:20:19.506 | 99.99th=[13829]
00:20:19.506 bw ( KiB/s): min=35016, max=35928, per=99.93%, avg=35360.00, stdev=397.59, samples=4
00:20:19.506 iops : min= 8754, max= 8982, avg=8840.00, stdev=99.40, samples=4
00:20:19.506 write: IOPS=8864, BW=34.6MiB/s (36.3MB/s)(69.5MiB/2006msec); 0 zone resets
00:20:19.506 slat (nsec): min=2037, max=96683, avg=2508.23, stdev=1261.23
00:20:19.506 clat (usec): min=993, max=12456, avg=6426.75, stdev=544.25
00:20:19.506 lat (usec): min=999, max=12459, avg=6429.26, stdev=544.23
00:20:19.506 clat percentiles (usec):
00:20:19.506 | 1.00th=[ 5211], 5.00th=[ 5604], 10.00th=[ 5800], 20.00th=[ 6063],
00:20:19.506 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6521],
00:20:19.506 | 70.00th=[ 6652], 80.00th=[ 6849], 90.00th=[ 7046], 95.00th=[ 7242],
00:20:19.506 | 99.00th=[ 7504], 99.50th=[ 7767], 99.90th=[10552], 99.95th=[11600],
00:20:19.506 | 99.99th=[12387]
00:20:19.506 bw ( KiB/s): min=35264, max=35808, per=99.95%, avg=35442.00, stdev=247.73, samples=4
00:20:19.506 iops : min= 8816, max= 8952, avg=8860.50, stdev=61.93, samples=4
00:20:19.506 lat (usec) : 1000=0.01%
00:20:19.506 lat (msec) : 2=0.02%, 4=0.11%, 10=99.67%, 20=0.19%
00:20:19.506 cpu : usr=57.51%, sys=37.46%, ctx=65, majf=0, minf=38
00:20:19.506 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:20:19.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:19.506 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:20:19.506 issued rwts: total=17745,17783,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:19.506 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:19.506
00:20:19.506 Run status group 0 (all jobs):
00:20:19.506 READ: bw=34.6MiB/s (36.2MB/s), 34.6MiB/s-34.6MiB/s (36.2MB/s-36.2MB/s), io=69.3MiB (72.7MB), run=2006-2006msec
00:20:19.506 WRITE: bw=34.6MiB/s (36.3MB/s), 34.6MiB/s-34.6MiB/s (36.3MB/s-36.3MB/s), io=69.5MiB (72.8MB), run=2006-2006msec
00:20:19.506 19:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:20:19.506 19:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:20:19.506 19:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:20:19.506 19:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host --
common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:19.506 19:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:19.506 19:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:19.506 19:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:20:19.506 19:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:19.506 19:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:19.506 19:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:19.506 19:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:20:19.506 19:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:19.506 19:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:19.506 19:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:19.506 19:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:19.506 19:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:19.506 19:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:19.506 19:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:19.506 19:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:19.506 19:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:19.506 19:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:19.506 19:02:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:19.506 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:19.506 fio-3.35 00:20:19.506 Starting 1 thread 00:20:19.506 EAL: No free 2048 kB hugepages reported on node 1 00:20:22.033 00:20:22.033 test: (groupid=0, jobs=1): err= 0: pid=3206919: Wed Jul 24 19:02:59 2024 00:20:22.033 read: IOPS=8179, BW=128MiB/s (134MB/s)(257MiB/2008msec) 00:20:22.033 slat (usec): min=2, max=100, avg= 3.89, stdev= 2.02 00:20:22.033 clat (usec): min=2898, max=22110, avg=9351.82, stdev=2429.47 00:20:22.033 lat (usec): min=2902, max=22119, avg=9355.71, stdev=2429.85 00:20:22.033 clat percentiles (usec): 00:20:22.033 | 1.00th=[ 4621], 5.00th=[ 5800], 10.00th=[ 6587], 20.00th=[ 7439], 00:20:22.033 | 30.00th=[ 8029], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[ 9503], 00:20:22.033 | 70.00th=[10159], 80.00th=[11076], 90.00th=[12518], 95.00th=[13698], 00:20:22.033 | 99.00th=[16581], 99.50th=[18744], 99.90th=[19792], 99.95th=[20055], 00:20:22.033 | 99.99th=[21103] 00:20:22.033 bw ( KiB/s): min=55456, max=72128, 
per=50.32%, avg=65848.00, stdev=7276.53, samples=4 00:20:22.033 iops : min= 3466, max= 4508, avg=4115.50, stdev=454.78, samples=4 00:20:22.033 write: IOPS=4660, BW=72.8MiB/s (76.4MB/s)(135MiB/1849msec); 0 zone resets 00:20:22.033 slat (usec): min=30, max=354, avg=34.74, stdev=10.48 00:20:22.033 clat (usec): min=6466, max=23431, avg=11379.81, stdev=2341.68 00:20:22.033 lat (usec): min=6498, max=23522, avg=11414.55, stdev=2346.76 00:20:22.033 clat percentiles (usec): 00:20:22.033 | 1.00th=[ 7570], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9503], 00:20:22.033 | 30.00th=[10028], 40.00th=[10421], 50.00th=[10945], 60.00th=[11469], 00:20:22.033 | 70.00th=[12125], 80.00th=[13042], 90.00th=[14222], 95.00th=[15533], 00:20:22.033 | 99.00th=[19792], 99.50th=[21627], 99.90th=[23200], 99.95th=[23200], 00:20:22.033 | 99.99th=[23462] 00:20:22.033 bw ( KiB/s): min=58528, max=74624, per=91.82%, avg=68464.00, stdev=7057.94, samples=4 00:20:22.033 iops : min= 3658, max= 4664, avg=4279.00, stdev=441.12, samples=4 00:20:22.033 lat (msec) : 4=0.16%, 10=54.84%, 20=44.64%, 50=0.35% 00:20:22.033 cpu : usr=73.59%, sys=23.17%, ctx=31, majf=0, minf=60 00:20:22.033 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:20:22.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:22.033 issued rwts: total=16424,8617,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.033 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:22.033 00:20:22.033 Run status group 0 (all jobs): 00:20:22.033 READ: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=257MiB (269MB), run=2008-2008msec 00:20:22.033 WRITE: bw=72.8MiB/s (76.4MB/s), 72.8MiB/s-72.8MiB/s (76.4MB/s-76.4MB/s), io=135MiB (141MB), run=1849-1849msec 00:20:22.033 19:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:22.290 19:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:20:22.290 19:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:22.290 19:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:20:22.290 19:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:20:22.290 19:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:22.290 19:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:20:22.290 19:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:22.290 19:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:20:22.290 19:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:22.290 19:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:22.290 rmmod nvme_tcp 00:20:22.290 rmmod nvme_fabrics 00:20:22.290 rmmod nvme_keyring 00:20:22.290 19:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:22.290 19:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:20:22.290 19:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:20:22.290 19:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3206103 ']' 00:20:22.291 19:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@490 -- # killprocess 3206103 00:20:22.291 19:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 3206103 ']' 00:20:22.291 19:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 3206103 00:20:22.291 19:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:20:22.291 19:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:22.291 19:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3206103 00:20:22.291 19:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:22.291 19:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:22.291 19:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3206103' 00:20:22.291 killing process with pid 3206103 00:20:22.291 19:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 3206103 00:20:22.291 19:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 3206103 00:20:22.548 19:03:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:22.548 19:03:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:22.548 19:03:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:22.548 19:03:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:22.548 19:03:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:22.548 19:03:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.548 19:03:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:22.548 19:03:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.078 19:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:25.078 00:20:25.078 real 0m12.540s 00:20:25.078 user 0m36.357s 00:20:25.078 sys 0m4.585s 00:20:25.078 19:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:25.078 19:03:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.078 ************************************ 00:20:25.078 END TEST nvmf_fio_host 00:20:25.078 ************************************ 00:20:25.078 19:03:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:25.078 19:03:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:25.078 19:03:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:25.078 19:03:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.078 ************************************ 00:20:25.078 START TEST nvmf_failover 00:20:25.078 ************************************ 00:20:25.078 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:25.078 * Looking for test storage... 
00:20:25.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:25.078 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:25.078 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:20:25.078 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:25.078 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:25.078 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:25.078 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 
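The preamble that just scrolled past, sourcing nvmf/common.sh, fixing the Malloc bdev geometry, and pointing one rpc.py client at the target socket and another at bdevperf's own socket, reduces to a few lines of shell. A minimal sketch of that shape (paths taken from this run; the helpers and their exact behavior are assumed to come from common.sh, this is not the literal failover.sh):

    #!/usr/bin/env bash
    # Sketch of the failover.sh preamble, not the literal script.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    source "$rootdir/test/nvmf/common.sh"

    MALLOC_BDEV_SIZE=64                        # 64 MiB Malloc bdev backing namespace 1
    MALLOC_BLOCK_SIZE=512                      # 512-byte logical blocks
    rpc_py=$rootdir/scripts/rpc.py             # drives the nvmf target on the default /var/tmp/spdk.sock
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock   # bdevperf gets its own RPC socket

    nvmftestinit   # common.sh helper: picks the e810 NICs, builds the netns topology, loads nvme-tcp

Keeping bdevperf on a separate UNIX socket is what lets the script reconfigure the initiator mid-run with rpc.py -s /var/tmp/bdevperf.sock while the target stays reachable on its default socket.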
00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:20:25.079 19:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:26.453 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:26.453 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:26.712 19:03:04 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:26.712 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:26.712 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:26.712 Found net devices under 0000:09:00.0: cvl_0_0 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:26.712 Found net devices under 0000:09:00.1: cvl_0_1 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:26.712 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:26.713 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:26.713 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:26.713 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:26.713 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:26.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:26.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:20:26.713 00:20:26.713 --- 10.0.0.2 ping statistics --- 00:20:26.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.713 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:20:26.713 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:26.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:26.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:20:26.713 00:20:26.713 --- 10.0.0.1 ping statistics --- 00:20:26.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.713 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:20:26.713 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.713 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:20:26.713 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:26.713 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.713 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:26.713 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:26.713 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.713 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:26.713 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:26.713 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:20:26.713 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:26.713 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:26.713 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:26.713 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3209222 00:20:26.713 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:26.713 19:03:04 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3209222 00:20:26.713 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3209222 ']' 00:20:26.713 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.713 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:26.713 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.713 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:26.713 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:26.713 [2024-07-24 19:03:04.277368] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:20:26.713 [2024-07-24 19:03:04.277481] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.713 EAL: No free 2048 kB hugepages reported on node 1 00:20:26.971 [2024-07-24 19:03:04.365082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:26.971 [2024-07-24 19:03:04.506991] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.971 [2024-07-24 19:03:04.507056] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.971 [2024-07-24 19:03:04.507096] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.971 [2024-07-24 19:03:04.507129] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.971 [2024-07-24 19:03:04.507156] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:26.971 [2024-07-24 19:03:04.507257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.971 [2024-07-24 19:03:04.507333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.971 [2024-07-24 19:03:04.507323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:27.229 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:27.229 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:20:27.229 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:27.229 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:27.229 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:27.229 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:27.229 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:27.486 [2024-07-24 19:03:04.887667] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:27.486 19:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:27.744 Malloc0 00:20:27.744 19:03:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:28.001 19:03:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:28.257 19:03:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:28.515 [2024-07-24 19:03:05.882953] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:28.515 19:03:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:28.773 [2024-07-24 19:03:06.127619] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:28.773 19:03:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:29.031 [2024-07-24 19:03:06.376473] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:20:29.031 19:03:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3209512 00:20:29.031 19:03:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:20:29.031 19:03:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; 
nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:29.031 19:03:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3209512 /var/tmp/bdevperf.sock 00:20:29.031 19:03:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3209512 ']' 00:20:29.031 19:03:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:29.031 19:03:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:29.031 19:03:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:29.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:29.031 19:03:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:29.031 19:03:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:29.289 19:03:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:29.289 19:03:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:20:29.289 19:03:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:29.853 NVMe0n1 00:20:29.853 19:03:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:30.111 00:20:30.111 19:03:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3209760 00:20:30.111 19:03:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:30.111 19:03:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:20:31.097 19:03:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:31.355 19:03:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:20:34.636 19:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:34.636 00:20:34.636 19:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:34.893 [2024-07-24 19:03:12.352283] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba9d10 is same with the state(5) to be set 00:20:34.893 [2024-07-24 19:03:12.352354] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba9d10 is same with the state(5) to be set 00:20:34.893 [2024-07-24 19:03:12.352369] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba9d10 is same with the state(5) to 
be set 00:20:34.893 [... the identical tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: 'The recv state of tqpair=0xba9d10 is same with the state(5) to be set' line repeats roughly thirty times (19:03:12.352382 through 19:03:12.352736) while the qpair behind the just-removed 4421 listener drains; identical repeats condensed ...]
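Context for these bursts: the initiator side holds a single bdev, NVMe0n1, with several paths to the same subsystem, which is why a listener can be yanked without failing the run. A condensed replay of the attach sequence from earlier in this run (the commands appear verbatim in the xtrace above; the comments on path semantics are an interpretation, not something the log states):

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    # First attach creates controller NVMe0 and exposes bdev NVMe0n1 through 10.0.0.2:4420.
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # Reusing -b NVMe0 with a different trsvcid registers 4421 as an alternate path
    # for the same controller rather than creating a second bdev.
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # After 4420 is dropped (host/failover.sh@43), 4422 is attached the same way (host/failover.sh@47).
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1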
00:20:34.893 19:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:20:38.174 19:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:38.174 [2024-07-24 19:03:15.605238] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:38.174 19:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:20:39.106 19:03:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:39.364 [... an identical tcp.c:1653 *ERROR* burst follows for tqpair=0xbaaab0, roughly two dozen repeats (19:03:16.861569 through 19:03:16.861891), as the 4422 listener is torn down; identical repeats condensed ...]
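Pulled out of the xtrace for readability, the listener choreography that produced the two bursts above looks like this; a condensed replay of the rpc.py calls from this run, with the @NN markers pointing back at failover.sh lines and the comments being interpretation:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SUBSYS=nqn.2016-06.io.spdk:cnode1
    $RPC nvmf_subsystem_remove_listener $SUBSYS -t tcp -a 10.0.0.2 -s 4420   # @43: drop the first path
    sleep 3                                                                  # @45: I/O fails over to 4421
    $RPC nvmf_subsystem_remove_listener $SUBSYS -t tcp -a 10.0.0.2 -s 4421   # @48: force a second failover (0xba9d10 burst)
    sleep 3                                                                  # @50
    $RPC nvmf_subsystem_add_listener $SUBSYS -t tcp -a 10.0.0.2 -s 4420      # @53: restore the original port
    sleep 1                                                                  # @55
    $RPC nvmf_subsystem_remove_listener $SUBSYS -t tcp -a 10.0.0.2 -s 4422   # @57: retire the fallback (0xbaaab0 burst)

bdevperf keeps the 128-deep verify workload running across all of this; the bare 0 logged after wait 3209760 below appears to be perform_tests reporting a clean run.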
00:20:39.364 19:03:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3209760 00:20:45.936 0 00:20:45.936 19:03:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3209512 00:20:45.936 19:03:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3209512 ']' 00:20:45.936 19:03:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3209512 00:20:45.936 19:03:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:20:45.936 19:03:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:45.936 19:03:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3209512 00:20:45.936 19:03:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:45.936 19:03:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:45.936 19:03:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3209512' 00:20:45.936 killing process with pid 3209512 00:20:45.936 19:03:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3209512 00:20:45.936 19:03:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3209512 00:20:45.936 19:03:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:45.937 [2024-07-24 19:03:06.435933] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:20:45.937 [2024-07-24 19:03:06.436018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3209512 ] 00:20:45.937 EAL: No free 2048 kB hugepages reported on node 1 00:20:45.937 [2024-07-24 19:03:06.495747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.937 [2024-07-24 19:03:06.607537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.937 Running I/O for 15 seconds... 00:20:45.937 [2024-07-24 19:03:08.725319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:77504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.937 [2024-07-24 19:03:08.725401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.937 [2024-07-24 19:03:08.725427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:77512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.937 [2024-07-24 19:03:08.725442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.937 [2024-07-24 19:03:08.725458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:77520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.937 [2024-07-24 19:03:08.725471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.937 [2024-07-24 19:03:08.725485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:77528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.937 [2024-07-24 19:03:08.725498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.937 [2024-07-24 19:03:08.725512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.937 [2024-07-24 19:03:08.725525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.937 [2024-07-24 19:03:08.725539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:77544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.937 [2024-07-24 19:03:08.725552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.937 [2024-07-24 19:03:08.725565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:77552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.937 [2024-07-24 19:03:08.725578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.937 [2024-07-24 19:03:08.725593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.937 [2024-07-24 19:03:08.725605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.937 [... condensed: for dozens of entries the dump repeats the same two-line rhythm, an nvme_io_qpair_print_command for WRITE sqid:1 lba:77568-77688 (with interleaved READ sqid:1 lba:77048-77072) followed by an ABORTED - SQ DELETION (00/08) completion, as the qpair on the removed 4420 listener is deleted under load; the tail of the dump resumes below ...] 00:20:45.937 [2024-07-24 19:03:08.726194]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:77080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.937 [2024-07-24 19:03:08.726206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.937 [2024-07-24 19:03:08.726220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:77088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.937 [2024-07-24 19:03:08.726233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.937 [2024-07-24 19:03:08.726247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.937 [2024-07-24 19:03:08.726261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.937 [2024-07-24 19:03:08.726275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.937 [2024-07-24 19:03:08.726288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.937 [2024-07-24 19:03:08.726302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.937 [2024-07-24 19:03:08.726315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.937 [2024-07-24 19:03:08.726329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.937 [2024-07-24 19:03:08.726342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.937 [2024-07-24 19:03:08.726357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.937 [2024-07-24 19:03:08.726369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.937 [2024-07-24 19:03:08.726406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.937 [2024-07-24 19:03:08.726419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.937 [2024-07-24 19:03:08.726437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.937 [2024-07-24 19:03:08.726451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.937 [2024-07-24 19:03:08.726464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:77744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.937 [2024-07-24 19:03:08.726477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.937 [2024-07-24 19:03:08.726491] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.937 [2024-07-24 19:03:08.726503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.937 [2024-07-24 19:03:08.726519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.937 [2024-07-24 19:03:08.726532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.937 [2024-07-24 19:03:08.726546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:77768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.937 [2024-07-24 19:03:08.726559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.937 [2024-07-24 19:03:08.726573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:77776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.937 [2024-07-24 19:03:08.726586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.937 [2024-07-24 19:03:08.726599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.937 [2024-07-24 19:03:08.726612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.937 [2024-07-24 19:03:08.726626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.937 [2024-07-24 19:03:08.726638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.937 [2024-07-24 19:03:08.726652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.937 [2024-07-24 19:03:08.726665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.937 [2024-07-24 19:03:08.726679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.937 [2024-07-24 19:03:08.726691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.937 [2024-07-24 19:03:08.726705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.937 [2024-07-24 19:03:08.726718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.937 [2024-07-24 19:03:08.726732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.937 [2024-07-24 19:03:08.726745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.937 [2024-07-24 19:03:08.726758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:77832 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.937 [2024-07-24 19:03:08.726774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.937 [2024-07-24 19:03:08.726789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:77840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.937 [2024-07-24 19:03:08.726804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.937 [2024-07-24 19:03:08.726818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.937 [2024-07-24 19:03:08.726831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.726845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:77856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.938 [2024-07-24 19:03:08.726857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.726872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.938 [2024-07-24 19:03:08.726885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.726899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:77872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.938 [2024-07-24 19:03:08.726912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.726926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.938 [2024-07-24 19:03:08.726939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.726953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.938 [2024-07-24 19:03:08.726965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.726979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.938 [2024-07-24 19:03:08.726992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.727005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.938 [2024-07-24 19:03:08.727018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.727032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:45.938 [2024-07-24 19:03:08.727044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.727059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.938 [2024-07-24 19:03:08.727072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.727085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.938 [2024-07-24 19:03:08.727098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.727133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:77936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.938 [2024-07-24 19:03:08.727151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.727166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.938 [2024-07-24 19:03:08.727179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.727194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.938 [2024-07-24 19:03:08.727207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.727221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:77960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.938 [2024-07-24 19:03:08.727234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.727249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.938 [2024-07-24 19:03:08.727263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.727277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.938 [2024-07-24 19:03:08.727291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.727306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.938 [2024-07-24 19:03:08.727319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.727334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.938 [2024-07-24 19:03:08.727347] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.727361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:78000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.938 [2024-07-24 19:03:08.727384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.727399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:77104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.938 [2024-07-24 19:03:08.727427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.727442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.938 [2024-07-24 19:03:08.727455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.727469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:77120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.938 [2024-07-24 19:03:08.727497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.727513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:77128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.938 [2024-07-24 19:03:08.727526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.727545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:77136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.938 [2024-07-24 19:03:08.727559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.727575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.938 [2024-07-24 19:03:08.727588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.727602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:77152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.938 [2024-07-24 19:03:08.727616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.727630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:77160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.938 [2024-07-24 19:03:08.727644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.727658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:77168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.938 [2024-07-24 19:03:08.727671] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.727686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:77176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.938 [2024-07-24 19:03:08.727699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.727714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:77184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.938 [2024-07-24 19:03:08.727727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.727742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:77192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.938 [2024-07-24 19:03:08.727755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.727769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:77200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.938 [2024-07-24 19:03:08.727783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.727797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:77208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.938 [2024-07-24 19:03:08.727825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.727840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.938 [2024-07-24 19:03:08.727852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.727866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.938 [2024-07-24 19:03:08.727879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.727893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.938 [2024-07-24 19:03:08.727909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.727923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.938 [2024-07-24 19:03:08.727936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.727951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.938 [2024-07-24 19:03:08.727964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.727978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.938 [2024-07-24 19:03:08.727991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.728005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.938 [2024-07-24 19:03:08.728018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.728032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.938 [2024-07-24 19:03:08.728044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.728058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.938 [2024-07-24 19:03:08.728071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.728116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.938 [2024-07-24 19:03:08.728132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.728147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.938 [2024-07-24 19:03:08.728160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.728174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:77240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.938 [2024-07-24 19:03:08.728188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.728203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:77248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.938 [2024-07-24 19:03:08.728216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.728230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:77256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.938 [2024-07-24 19:03:08.728244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.728259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:77264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.938 [2024-07-24 19:03:08.728272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:45.938 [2024-07-24 19:03:08.728291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.938 [2024-07-24 19:03:08.728305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.728319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:77280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.938 [2024-07-24 19:03:08.728333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.728347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:77288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.938 [2024-07-24 19:03:08.728361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.728375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:77296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.938 [2024-07-24 19:03:08.728388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.728403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:77304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.938 [2024-07-24 19:03:08.728431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.728446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:77312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.938 [2024-07-24 19:03:08.728459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.728474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:77320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.938 [2024-07-24 19:03:08.728487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.728502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:77328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.938 [2024-07-24 19:03:08.728515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.728529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:77336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.938 [2024-07-24 19:03:08.728543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.728557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:77344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.938 [2024-07-24 19:03:08.728570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.728584] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.938 [2024-07-24 19:03:08.728597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.728611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.938 [2024-07-24 19:03:08.728624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.938 [2024-07-24 19:03:08.728638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:77368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.938 [2024-07-24 19:03:08.728651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.939 [2024-07-24 19:03:08.728669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:77376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.939 [2024-07-24 19:03:08.728683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.939 [2024-07-24 19:03:08.728697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:77384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.939 [2024-07-24 19:03:08.728711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.939 [2024-07-24 19:03:08.728725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:77392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.939 [2024-07-24 19:03:08.728738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.939 [2024-07-24 19:03:08.728752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.939 [2024-07-24 19:03:08.728766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.939 [2024-07-24 19:03:08.728781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:77408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.939 [2024-07-24 19:03:08.728795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.939 [2024-07-24 19:03:08.728810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.939 [2024-07-24 19:03:08.728823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.939 [2024-07-24 19:03:08.728837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.939 [2024-07-24 19:03:08.728850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.939 [2024-07-24 19:03:08.728864] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.939 [2024-07-24 19:03:08.728877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.939 [2024-07-24 19:03:08.728891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.939 [2024-07-24 19:03:08.728904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.939 [2024-07-24 19:03:08.728919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.939 [2024-07-24 19:03:08.728932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.939 [2024-07-24 19:03:08.728947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.939 [2024-07-24 19:03:08.728960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.939 [2024-07-24 19:03:08.728975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.939 [2024-07-24 19:03:08.728988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.939 [2024-07-24 19:03:08.729002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:77472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.939 [2024-07-24 19:03:08.729021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.939 [2024-07-24 19:03:08.729037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.939 [2024-07-24 19:03:08.729050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.939 [2024-07-24 19:03:08.729064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.939 [2024-07-24 19:03:08.729078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.939 [2024-07-24 19:03:08.729140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:45.939 [2024-07-24 19:03:08.729158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:45.939 [2024-07-24 19:03:08.729171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77496 len:8 PRP1 0x0 PRP2 0x0 00:20:45.939 [2024-07-24 19:03:08.729184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.939 [2024-07-24 19:03:08.729242] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f3dc10 was disconnected and freed. reset controller. 
00:20:45.939 [2024-07-24 19:03:08.729261] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:20:45.939 [2024-07-24 19:03:08.729295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:45.939 [2024-07-24 19:03:08.729380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:45.939 [2024-07-24 19:03:08.729403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:45.939 [2024-07-24 19:03:08.729417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:45.939 [2024-07-24 19:03:08.729431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:45.939 [2024-07-24 19:03:08.729452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:45.939 [2024-07-24 19:03:08.729466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:45.939 [2024-07-24 19:03:08.729479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:45.939 [2024-07-24 19:03:08.729493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:45.939 [2024-07-24 19:03:08.729539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f200f0 (9): Bad file descriptor
00:20:45.939 [2024-07-24 19:03:08.732833] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:45.939 [2024-07-24 19:03:08.807436] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:20:45.939 [2024-07-24 19:03:12.353003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:45.939 [2024-07-24 19:03:12.353046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:45.939 [2024-07-24 19:03:12.353064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:45.939 [2024-07-24 19:03:12.353079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:45.939 [2024-07-24 19:03:12.353099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:45.939 [2024-07-24 19:03:12.353122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:45.939 [2024-07-24 19:03:12.353137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:45.939 [2024-07-24 19:03:12.353150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:45.939 [2024-07-24 19:03:12.353163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f200f0 is same with the state(5) to be set
[... 2024-07-24 19:03:12.353230 through 19:03:12.355148: repeated nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs for queued READ (sqid:1, lba 80776-81296) commands, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:20:45.940 [2024-07-24 19:03:12.355163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81304 len:8
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.355176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.355191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:81312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.355204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.355219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:81320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.355232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.355247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:81328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.355260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.355274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.355288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.355302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:81344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.355316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.355330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:81352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.355343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.355362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:81360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.355376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.355391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.355405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.355420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:81376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.355433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.355455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:81384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:45.940 [2024-07-24 19:03:12.355469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.355483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:81392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.355497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.355511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:81400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.355525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.355540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:81408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.355553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.355568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:81416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.355581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.355598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.355612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.355626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.355639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.355654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:81440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.355668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.355683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:81448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.355697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.355711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:81456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.355729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.355745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.355759] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.355773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:81472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.355787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.355801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.355815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.355830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.355843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.355857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.355871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.355885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:81504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.355899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.355914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:81512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.355928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.355944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:81520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.355957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.355972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.940 [2024-07-24 19:03:12.355986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.356001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:81528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.356014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.356029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:81536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.356042] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.356057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:81544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.356070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.356085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:81552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.356108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.356126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.356139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.356154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:81568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.356167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.356182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:81576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.356195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.356209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:81584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.356223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.356239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:81592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.356252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.356267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.356280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.356295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:81608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.356323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.356339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:81616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.356352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.356367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:81624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.356381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.356396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.356410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.356425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:81640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.356439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.356454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.940 [2024-07-24 19:03:12.356468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.940 [2024-07-24 19:03:12.356487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.940 [2024-07-24 19:03:12.356502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.941 [2024-07-24 19:03:12.356517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.941 [2024-07-24 19:03:12.356531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.941 [2024-07-24 19:03:12.356546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.941 [2024-07-24 19:03:12.356560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.941 [2024-07-24 19:03:12.356576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.941 [2024-07-24 19:03:12.356589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.941 [2024-07-24 19:03:12.356604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.941 [2024-07-24 19:03:12.356618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.941 [2024-07-24 19:03:12.356633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.941 [2024-07-24 19:03:12.356661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:45.941 [2024-07-24 19:03:12.356676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.941 [2024-07-24 19:03:12.356690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.941 [2024-07-24 19:03:12.356705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.941 [2024-07-24 19:03:12.356719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.941 [2024-07-24 19:03:12.356733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.941 [2024-07-24 19:03:12.356746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.941 [2024-07-24 19:03:12.356761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.941 [2024-07-24 19:03:12.356775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.941 [2024-07-24 19:03:12.356789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.941 [2024-07-24 19:03:12.356803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.941 [2024-07-24 19:03:12.356818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.941 [2024-07-24 19:03:12.356831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.941 [2024-07-24 19:03:12.356846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.941 [2024-07-24 19:03:12.356863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.941 [2024-07-24 19:03:12.356878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.941 [2024-07-24 19:03:12.356892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.941 [2024-07-24 19:03:12.356907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.941 [2024-07-24 19:03:12.356921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.941 [2024-07-24 19:03:12.356935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.941 [2024-07-24 19:03:12.356948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.941 [2024-07-24 19:03:12.356975] 
00:20:45.941 [2024-07-24 19:03:12.356975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:20:45.941 [2024-07-24 19:03:12.356990] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:45.941 [2024-07-24 19:03:12.357002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81792 len:8 PRP1 0x0 PRP2 0x0
00:20:45.941 [2024-07-24 19:03:12.357014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:45.941 [2024-07-24 19:03:12.357076] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f4ed40 was disconnected and freed. reset controller.
00:20:45.941 [2024-07-24 19:03:12.357093] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:20:45.941 [2024-07-24 19:03:12.357130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:45.941 [2024-07-24 19:03:12.360433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:45.941 [2024-07-24 19:03:12.360471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f200f0 (9): Bad file descriptor
00:20:45.941 [2024-07-24 19:03:12.436774] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
[... repetitive per-command abort output elided (2024-07-24 19:03:16.862301 - 19:03:16.865429): for every queued command on sqid:1 - READs lba 18432-18640 (len:8, SGL TRANSPORT DATA BLOCK) and WRITEs lba 18680-19312 (len:8, SGL DATA BLOCK OFFSET) - nvme_qpair.c: 243:nvme_io_qpair_print_command logged the command and nvme_qpair.c: 474:spdk_nvme_print_completion logged ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:20:45.942 [2024-07-24 19:03:16.865444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1
lba:18648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.942 [2024-07-24 19:03:16.865458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.942 [2024-07-24 19:03:16.865472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.942 [2024-07-24 19:03:16.865485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.942 [2024-07-24 19:03:16.865499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.942 [2024-07-24 19:03:16.865512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.942 [2024-07-24 19:03:16.865526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.942 [2024-07-24 19:03:16.865539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.942 [2024-07-24 19:03:16.865553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.942 [2024-07-24 19:03:16.865567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.942 [2024-07-24 19:03:16.865581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.942 [2024-07-24 19:03:16.865594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.942 [2024-07-24 19:03:16.865608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.942 [2024-07-24 19:03:16.865621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.942 [2024-07-24 19:03:16.865636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.942 [2024-07-24 19:03:16.865649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.942 [2024-07-24 19:03:16.865663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.942 [2024-07-24 19:03:16.865676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.942 [2024-07-24 19:03:16.865690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.942 [2024-07-24 19:03:16.865703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.942 [2024-07-24 19:03:16.865718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19368 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:20:45.942 [2024-07-24 19:03:16.865731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.942 [2024-07-24 19:03:16.865745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.942 [2024-07-24 19:03:16.865761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.942 [2024-07-24 19:03:16.865784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.942 [2024-07-24 19:03:16.865798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.942 [2024-07-24 19:03:16.865813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.942 [2024-07-24 19:03:16.865826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.942 [2024-07-24 19:03:16.865840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.942 [2024-07-24 19:03:16.865853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.942 [2024-07-24 19:03:16.865867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.942 [2024-07-24 19:03:16.865880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.942 [2024-07-24 19:03:16.865894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.942 [2024-07-24 19:03:16.865907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.942 [2024-07-24 19:03:16.865921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.942 [2024-07-24 19:03:16.865934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.942 [2024-07-24 19:03:16.865949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.943 [2024-07-24 19:03:16.865962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.943 [2024-07-24 19:03:16.865976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.943 [2024-07-24 19:03:16.865989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.943 [2024-07-24 19:03:16.866017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:45.943 [2024-07-24 19:03:16.866032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:20:45.943 [2024-07-24 19:03:16.866043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19448 len:8 PRP1 0x0 PRP2 0x0 00:20:45.943 [2024-07-24 19:03:16.866056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.943 [2024-07-24 19:03:16.866136] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f50b40 was disconnected and freed. reset controller. 00:20:45.943 [2024-07-24 19:03:16.866156] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:20:45.943 [2024-07-24 19:03:16.866190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:45.943 [2024-07-24 19:03:16.866208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.943 [2024-07-24 19:03:16.866223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:45.943 [2024-07-24 19:03:16.866236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.943 [2024-07-24 19:03:16.866254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:45.943 [2024-07-24 19:03:16.866268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.943 [2024-07-24 19:03:16.866281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:45.943 [2024-07-24 19:03:16.866294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.943 [2024-07-24 19:03:16.866307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:45.943 [2024-07-24 19:03:16.866343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f200f0 (9): Bad file descriptor 00:20:45.943 [2024-07-24 19:03:16.869606] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:45.943 [2024-07-24 19:03:16.946025] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
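The burst of ABORTED - SQ DELETION notices above is the expected side effect of a forced failover: every command still queued on the deleted submission queue is completed with an abort status, the outstanding admin async-event requests are aborted too, and the controller is then reset against the next configured listener. When reading such a burst, a tally is usually more useful than the raw lines; the snippet below is a minimal sketch for doing that with awk (the file name build.log is a placeholder for wherever this output was captured):

    # Tally aborted WRITE/READ command prints and report the LBA span they covered
    awk '/nvme_io_qpair_print_command/ {
           op = ($0 ~ / WRITE /) ? "WRITE" : "READ"; n[op]++
           if (match($0, /lba:[0-9]+/)) {
             lba = substr($0, RSTART + 4, RLENGTH - 4) + 0
             if (min == "" || lba < min) min = lba
             if (lba > max) max = lba
           }
         }
         END { for (o in n) print o, n[o]; print "lba range: " min "-" max }' build.log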
00:20:45.943 
00:20:45.943 Latency(us)
00:20:45.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:45.943 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:45.943 Verification LBA range: start 0x0 length 0x4000
00:20:45.943 NVMe0n1 : 15.01 8492.82 33.18 580.81 0.00 14079.55 813.13 16602.45
00:20:45.943 ===================================================================================================================
00:20:45.943 Total : 8492.82 33.18 580.81 0.00 14079.55 813.13 16602.45
00:20:45.943 Received shutdown signal, test time was about 15.000000 seconds
00:20:45.943 
00:20:45.943 Latency(us)
00:20:45.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:45.943 ===================================================================================================================
00:20:45.943 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
19:03:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
19:03:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
19:03:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
19:03:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3212000
19:03:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
19:03:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3212000 /var/tmp/bdevperf.sock
19:03:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3212000 ']'
19:03:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
19:03:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
19:03:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:45.943 19:03:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
19:03:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
19:03:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
19:03:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
19:03:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:20:46.199 [2024-07-24 19:03:23.542939] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:20:46.199 19:03:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:20:46.456 [2024-07-24 19:03:23.831740] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:20:46.456 19:03:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:20:47.020 NVMe0n1
00:20:47.020 19:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:20:47.277 
00:20:47.277 19:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:20:47.534 
00:20:47.790 19:03:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:20:47.790 19:03:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:20:48.046 19:03:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:20:48.046 19:03:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:20:51.321 19:03:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:20:51.321 19:03:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:20:51.321 19:03:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3212670
00:20:51.321 19:03:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:51.321 19:03:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3212670
00:20:52.693 0
00:20:52.693 19:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-07-24 19:03:22.991654] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization...
[2024-07-24 19:03:22.991739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3212000 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-24 19:03:23.049401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-24 19:03:23.154702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[2024-07-24 19:03:25.615723] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[2024-07-24 19:03:25.615808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-24 19:03:25.615831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST/ABORTED pair repeats for admin cid:1 through cid:3 ...]
[2024-07-24 19:03:25.615929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[2024-07-24 19:03:25.615972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-07-24 19:03:25.616004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c50f0 (9): Bad file descriptor
[2024-07-24 19:03:25.665858] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:20:52.693 Running I/O for 1 seconds...
00:20:52.693 
00:20:52.693 Latency(us)
00:20:52.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:52.693 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:52.693 Verification LBA range: start 0x0 length 0x4000
00:20:52.693 NVMe0n1 : 1.01 8606.81 33.62 0.00 0.00 14801.18 2257.35 11942.12
00:20:52.693 ===================================================================================================================
00:20:52.693 Total : 8606.81 33.62 0.00 0.00 14801.18 2257.35 11942.12
19:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:20:52.950 19:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:20:52.950 19:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:20:53.208 19:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
19:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:20:53.466 19:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:20:53.723 19:03:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:20:57.002 19:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
19:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
19:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3212000
19:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3212000 ']'
19:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3212000
19:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
19:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
19:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3212000
19:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
19:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
19:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3212000'
killing process with pid 3212000
19:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3212000
19:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3212000
00:20:57.295 19:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
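The failover exercise above is driven entirely over RPC sockets: the target gains two extra listeners for the same subsystem, bdevperf attaches all three paths under one controller name, and detaching the active path forces bdev_nvme to fail over, after which try.txt is grepped for 'Resetting controller successful' to confirm each hop. A condensed sketch of that flow, using the same rpc.py commands that appear in the trace (the $rpc and $sock shorthands are illustrative, not part of the harness):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as in the trace
    sock=/var/tmp/bdevperf.sock
    # Target side: two additional listeners for the same subsystem
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # Host side: register all three paths under one controller name
    for port in 4420 4421 4422; do
        $rpc -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    # Drop the active path; bdev_nvme should fail over to the next listener
    $rpc -s "$sock" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc -s "$sock" bdev_nvme_get_controllers | grep -q NVMe0   # the controller must survive the detach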
19:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:57.552 19:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
19:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
19:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
19:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup
19:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync
19:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
19:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e
19:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20}
19:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
19:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
19:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e
19:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0
19:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3209222 ']'
19:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3209222
19:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3209222 ']'
19:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3209222
19:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
19:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
19:03:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3209222
19:03:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1
19:03:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
19:03:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3209222'
killing process with pid 3209222
19:03:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3209222
19:03:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3209222
00:20:57.811 19:03:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']'
19:03:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
19:03:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini
19:03:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
19:03:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns
19:03:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
19:03:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:20:57.811 19:03:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:00.343 19:03:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:21:00.343 
00:21:00.343 real	0m35.228s
00:21:00.343 user	2m3.527s
00:21:00.343 sys	0m6.119s
00:21:00.343 19:03:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable
19:03:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:21:00.343 ************************************
00:21:00.343 END TEST nvmf_failover
00:21:00.343 ************************************
00:21:00.343 19:03:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
19:03:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
19:03:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
19:03:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:21:00.343 ************************************
00:21:00.343 START TEST nvmf_host_discovery
00:21:00.343 ************************************
00:21:00.343 19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:21:00.343 * Looking for test storage...
00:21:00.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:21:00.343 19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
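The host identity above comes straight from nvme-cli: gen-hostnqn emits a UUID-based NQN, and the bare host ID used for --hostid is simply the UUID suffix of that NQN. A minimal sketch of the derivation (the parameter expansion shown is one way to do it, not necessarily the harness's exact mechanism):

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # strip everything through 'uuid:' to keep the bare UUID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")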
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain prefixes repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... likewise ...]
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... likewise ...]
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... likewise ...]
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']'
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']'
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]]
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable
19:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:02.242 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=()
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=()
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=()
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=()
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=()
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=()
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=()
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
[... nvmf/common.sh@306 through @318 populate the mlx array from the 0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x1017, 0x1019, 0x1015 and 0x1013 Mellanox IDs in the same way ...]
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 ))
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)'
Found 0000:09:00.0 (0x8086 - 0x159b)
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)'
Found 0000:09:00.1 (0x8086 - 0x159b)
[... the same ice driver checks repeat for 0000:09:00.1 ...]
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 ))
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]]
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 ))
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0'
Found net devices under 0000:09:00.0: cvl_0_0
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
[... the same sysfs lookup repeats for 0000:09:00.1 ...]
Found net devices under 0000:09:00.1: cvl_0_1
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 ))
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]]
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 ))
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms

--- 10.0.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms

--- 10.0.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']'
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3215270
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3215270
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 3215270 ']'
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100
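The namespace plumbing above lets one machine act as both target and initiator on real NICs: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2 while cvl_0_1 stays in the root namespace as 10.0.0.1, and the two pings prove reachability in both directions before nvmf_tgt starts inside the namespace. The same topology collected into one sketch, using the interface and namespace names enumerated above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in from the initiator side
    ping -c 1 10.0.0.2                                 # root namespace -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> root namespace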
00:21:02.244 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.244 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:02.244 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.244 [2024-07-24 19:03:39.577984] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:21:02.244 [2024-07-24 19:03:39.578076] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.244 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.244 [2024-07-24 19:03:39.643137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.244 [2024-07-24 19:03:39.752905] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.244 [2024-07-24 19:03:39.752956] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.244 [2024-07-24 19:03:39.752971] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.244 [2024-07-24 19:03:39.752983] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.244 [2024-07-24 19:03:39.752994] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:02.244 [2024-07-24 19:03:39.753023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.503 [2024-07-24 19:03:39.896701] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:21:02.503 [2024-07-24 19:03:39.904869] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.503 null0 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.503 null1 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3215414 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3215414 /tmp/host.sock 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 3215414 ']' 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:02.503 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:02.503 19:03:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.503 [2024-07-24 19:03:39.977777] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
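Two separate SPDK processes are in play from here on: the NVMe-oF target started inside the namespace (RPC on the default /var/tmp/spdk.sock) and a second nvmf_tgt acting as the NVMe host, with its own RPC socket at /tmp/host.sock so the two apps do not collide. A rough launch sketch; waitforlisten is a harness helper, so a crude socket poll stands in for it here:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # target: inside the namespace, reactor pinned to core 1 (-m 0x2)
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
  # host: root namespace, core 0 (-m 0x1), private RPC socket
  "$SPDK/build/bin/nvmf_tgt" -m 0x1 -r /tmp/host.sock &
  # crude stand-in for the harness's waitforlisten: block until each RPC socket exists
  for sock in /var/tmp/spdk.sock /tmp/host.sock; do
      until [ -S "$sock" ]; do sleep 0.2; done
  done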
00:21:02.503 [2024-07-24 19:03:39.977856] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3215414 ] 00:21:02.503 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.503 [2024-07-24 19:03:40.042540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.761 [2024-07-24 19:03:40.160742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.762 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:02.762 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:21:02.762 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:02.762 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:02.762 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.762 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.762 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.762 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:02.762 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.762 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.762 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.762 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:21:02.762 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:21:02.762 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:02.762 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.762 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.762 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:02.762 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:02.762 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:02.762 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.762 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:02.762 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:21:02.762 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:02.762 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:02.762 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.762 
19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:02.762 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.762 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:02.762 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:21:03.020 19:03:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:03.020 [2024-07-24 19:03:40.554661] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:03.020 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.278 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:21:03.278 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:21:03.278 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:03.278 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:03.278 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:03.278 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:03.278 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:03.278 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:03.278 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:03.278 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:03.278 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:03.278 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.278 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:03.279 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.279 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:03.279 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:21:03.279 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:03.279 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:03.279 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:03.279 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.279 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:03.279 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.279 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:03.279 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:03.279 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:03.279 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:03.279 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:03.279 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:21:03.279 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:03.279 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:03.279 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.279 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:03.279 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:03.279 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:03.279 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.279 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:21:03.279 19:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:21:03.844 [2024-07-24 19:03:41.344967] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:03.844 [2024-07-24 19:03:41.344995] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:03.844 [2024-07-24 19:03:41.345019] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:04.101 
[2024-07-24 19:03:41.473469] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
[2024-07-24 19:03:41.657728] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
[2024-07-24 19:03:41.657756] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:21:04.360 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:21:04.360 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:21:04.360 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names
00:21:04.360 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:21:04.360 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:21:04.360 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:04.360 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:04.360 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:21:04.360 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:21:04.360 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:04.360 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:04.360 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:21:04.360 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:21:04.360 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:21:04.360 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:21:04.360 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:21:04.360 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:21:04.360 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list
00:21:04.360 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:21:04.360 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:04.360 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:21:04.360 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:04.360 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:21:04.360 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:21:04.360 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:04.360 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
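The attach above is driven purely over RPC, and the ordering is the point: the host subscribes to the discovery service first, and nvme0/nvme0n1 only materialize once nvmf_subsystem_add_host authorizes the host's NQN. A sketch of the sequence as traced (rpc_cmd in the trace wraps SPDK's scripts/rpc.py; the target uses the default /var/tmp/spdk.sock):

  rpc="$SPDK/scripts/rpc.py"
  # target: TCP transport, discovery listener on 8009, and a backing null bdev
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  $rpc bdev_null_create null0 1000 512
  # host: start discovery against 8009 before the I/O subsystem even exists
  $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test
  # target: expose the subsystem; add_host is what finally triggers the attach
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test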
00:21:04.360 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:04.360 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:04.360 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:04.360 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:04.360 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:04.360 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:21:04.360 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:21:04.360 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:04.361 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.620 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:04.620 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:04.620 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:21:04.620 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:04.620 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:04.620 19:03:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:04.620 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:04.620 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:04.620 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:04.620 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:04.620 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:04.620 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:04.620 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.620 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.620 19:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.620 [2024-07-24 19:03:42.015044] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:04.620 [2024-07-24 19:03:42.015686] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:04.620 [2024-07-24 19:03:42.015736] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:04.620 19:03:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.620 [2024-07-24 19:03:42.141555] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:21:04.620 19:03:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:21:04.878 [2024-07-24 19:03:42.441982] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:04.879 [2024-07-24 19:03:42.442011] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:04.879 [2024-07-24 19:03:42.442020] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:05.812 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:05.812 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:05.812 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:21:05.812 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:05.812 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:05.812 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.812 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:05.812 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:05.812 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:05.812 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.812 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:05.812 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:05.812 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:21:05.812 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:05.812 19:03:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:05.812 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:05.812 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:05.812 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:05.812 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:05.812 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:05.812 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:05.812 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:05.812 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.812 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:05.812 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.812 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:05.812 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:05.812 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:05.812 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:05.813 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:05.813 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.813 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:05.813 [2024-07-24 19:03:43.235502] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:05.813 [2024-07-24 19:03:43.235538] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:05.813 [2024-07-24 19:03:43.238668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:05.813 [2024-07-24 19:03:43.238699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.813 [2024-07-24 19:03:43.238780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:05.813 [2024-07-24 19:03:43.238797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.813 [2024-07-24 19:03:43.238811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:05.813 [2024-07-24 19:03:43.238824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.813 [2024-07-24 19:03:43.238838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:05.813 [2024-07-24 19:03:43.238852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.813 [2024-07-24 19:03:43.238865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0c20 is same with the state(5) to be set 00:21:05.813 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.813 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:05.813 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:05.813 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:05.813 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:05.813 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:05.813 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:21:05.813 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:05.813 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.813 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:05.813 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:05.813 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:05.813 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:05.813 [2024-07-24 19:03:43.248674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f0c20 (9): Bad file descriptor 00:21:05.813 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.813 [2024-07-24 19:03:43.258719] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:05.813 [2024-07-24 19:03:43.258947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:05.813 [2024-07-24 19:03:43.258978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c20 with addr=10.0.0.2, port=4420 00:21:05.813 [2024-07-24 19:03:43.259011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0c20 is same with the state(5) to be set 00:21:05.813 [2024-07-24 19:03:43.259036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f0c20 (9): Bad file descriptor 00:21:05.813 [2024-07-24 19:03:43.259058] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:05.813 [2024-07-24 19:03:43.259072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:05.813 [2024-07-24 19:03:43.259087] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:05.813 [2024-07-24 19:03:43.259117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:05.813 [2024-07-24 19:03:43.268796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:05.813 [2024-07-24 19:03:43.269007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:05.813 [2024-07-24 19:03:43.269034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c20 with addr=10.0.0.2, port=4420 00:21:05.813 [2024-07-24 19:03:43.269056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0c20 is same with the state(5) to be set 00:21:05.813 [2024-07-24 19:03:43.269077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f0c20 (9): Bad file descriptor 00:21:05.813 [2024-07-24 19:03:43.269097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:05.813 [2024-07-24 19:03:43.269122] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:05.813 [2024-07-24 19:03:43.269135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:05.813 [2024-07-24 19:03:43.269154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:05.813 [2024-07-24 19:03:43.278883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:05.813 [2024-07-24 19:03:43.279112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:05.813 [2024-07-24 19:03:43.279169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c20 with addr=10.0.0.2, port=4420 00:21:05.813 [2024-07-24 19:03:43.279186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0c20 is same with the state(5) to be set 00:21:05.813 [2024-07-24 19:03:43.279208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f0c20 (9): Bad file descriptor 00:21:05.813 [2024-07-24 19:03:43.279228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:05.813 [2024-07-24 19:03:43.279242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:05.813 [2024-07-24 19:03:43.279255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:05.813 [2024-07-24 19:03:43.279274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
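The (( max-- )) / eval / sleep 1 fragments interleaved through this whole trace come from one harness helper, waitforcondition in common/autotest_common.sh (the @914-@920 line tags above). Reconstructed from the xtrace, it is roughly:

  # poll an arbitrary bash condition for up to ~10 seconds (autotest_common.sh@914-920)
  waitforcondition() {
      local cond=$1
      local max=10
      while ((max--)); do
          eval "$cond" && return 0
          sleep 1
      done
      return 1   # assumed: the trace only ever shows the success path
  }
  # e.g.: waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'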
00:21:05.813 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.813 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:05.813 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:05.813 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:05.813 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:05.813 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:05.813 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:05.813 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:21:05.813 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:05.813 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:05.813 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.813 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:05.813 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:05.813 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:05.813 [2024-07-24 19:03:43.288959] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:05.813 [2024-07-24 19:03:43.289190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:05.813 [2024-07-24 19:03:43.289218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c20 with addr=10.0.0.2, port=4420 00:21:05.813 [2024-07-24 19:03:43.289234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0c20 is same with the state(5) to be set 00:21:05.813 [2024-07-24 19:03:43.289256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f0c20 (9): Bad file descriptor 00:21:05.813 [2024-07-24 19:03:43.290175] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:05.813 [2024-07-24 19:03:43.290199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:05.813 [2024-07-24 19:03:43.290214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:05.813 [2024-07-24 19:03:43.290246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
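Likewise, the get_* conditions being polled are thin jq pipelines over the host's RPC socket, reconstructed here from the host/discovery.sh@55/@59/@63 fragments in the trace:

  rpc_cmd() { "$SPDK/scripts/rpc.py" "$@"; }   # stand-in for the harness wrapper
  # controller names the host currently sees (discovery.sh@59)
  get_subsystem_names() {
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }
  # bdevs created by the attach (discovery.sh@55)
  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  # trsvcid (TCP port) of every active path on one controller (discovery.sh@63)
  get_subsystem_paths() {
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
          jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }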
00:21:05.813 [2024-07-24 19:03:43.299035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:05.813 [2024-07-24 19:03:43.299266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:05.813 [2024-07-24 19:03:43.299294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c20 with addr=10.0.0.2, port=4420 00:21:05.813 [2024-07-24 19:03:43.299310] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0c20 is same with the state(5) to be set 00:21:05.813 [2024-07-24 19:03:43.299333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f0c20 (9): Bad file descriptor 00:21:05.813 [2024-07-24 19:03:43.299365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:05.813 [2024-07-24 19:03:43.299381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:05.813 [2024-07-24 19:03:43.299405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:05.813 [2024-07-24 19:03:43.299424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:05.813 [2024-07-24 19:03:43.309122] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:05.813 [2024-07-24 19:03:43.309284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:05.813 [2024-07-24 19:03:43.309310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c20 with addr=10.0.0.2, port=4420 00:21:05.813 [2024-07-24 19:03:43.309326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0c20 is same with the state(5) to be set 00:21:05.813 [2024-07-24 19:03:43.309348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f0c20 (9): Bad file descriptor 00:21:05.814 [2024-07-24 19:03:43.309380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:05.814 [2024-07-24 19:03:43.309396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:05.814 [2024-07-24 19:03:43.309415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:05.814 [2024-07-24 19:03:43.309447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
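The condition being polled here is get_bdev_list (host/discovery.sh@55 in the trace): it lists bdev names over the host app's private RPC socket and flattens them into one sorted, space-separated line so the test can string-compare against "nvme0n1 nvme0n2". The jq/sort/xargs pipeline is exactly as traced; the function wrapper around it is assumed:

    get_bdev_list() {
        # query the host-side app on its own RPC socket and flatten the names
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }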
00:21:05.814 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.814 [2024-07-24 19:03:43.319194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:05.814 [2024-07-24 19:03:43.319385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:05.814 [2024-07-24 19:03:43.319412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c20 with addr=10.0.0.2, port=4420 00:21:05.814 [2024-07-24 19:03:43.319428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0c20 is same with the state(5) to be set 00:21:05.814 [2024-07-24 19:03:43.319450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f0c20 (9): Bad file descriptor 00:21:05.814 [2024-07-24 19:03:43.319509] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:05.814 [2024-07-24 19:03:43.319529] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:05.814 [2024-07-24 19:03:43.319543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:05.814 [2024-07-24 19:03:43.319562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:05.814 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:05.814 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:05.814 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:05.814 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:05.814 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:05.814 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:05.814 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:05.814 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:21:05.814 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:05.814 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.814 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:05.814 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:05.814 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:05.814 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:05.814 [2024-07-24 19:03:43.329281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:05.814 [2024-07-24 19:03:43.329491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:05.814 [2024-07-24 
19:03:43.329519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c20 with addr=10.0.0.2, port=4420 00:21:05.814 [2024-07-24 19:03:43.329535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0c20 is same with the state(5) to be set 00:21:05.814 [2024-07-24 19:03:43.329556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f0c20 (9): Bad file descriptor 00:21:05.814 [2024-07-24 19:03:43.329761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:05.814 [2024-07-24 19:03:43.329782] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:05.814 [2024-07-24 19:03:43.329801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:05.814 [2024-07-24 19:03:43.329835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:05.814 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.814 [2024-07-24 19:03:43.339355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:05.814 [2024-07-24 19:03:43.339600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:05.814 [2024-07-24 19:03:43.339628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c20 with addr=10.0.0.2, port=4420 00:21:05.814 [2024-07-24 19:03:43.339643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0c20 is same with the state(5) to be set 00:21:05.814 [2024-07-24 19:03:43.339665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f0c20 (9): Bad file descriptor 00:21:05.814 [2024-07-24 19:03:43.339697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:05.814 [2024-07-24 19:03:43.339713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:05.814 [2024-07-24 19:03:43.339727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:05.814 [2024-07-24 19:03:43.339746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
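get_subsystem_paths (host/discovery.sh@63, traced just above) applies the same pattern to controller paths: it asks the host for controller nvme0 and extracts the transport service IDs (TCP ports) in numeric order, so the check [[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]] passes once only 4421 remains. A sketch under the same assumption that the wrapper matches the traced pipeline:

    get_subsystem_paths() {
        local name=$1
        # list the controller's active paths and print their ports, lowest first
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$name" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }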
00:21:05.814 [2024-07-24 19:03:43.349442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:05.814 [2024-07-24 19:03:43.349678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:05.814 [2024-07-24 19:03:43.349703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c20 with addr=10.0.0.2, port=4420 00:21:05.814 [2024-07-24 19:03:43.349718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0c20 is same with the state(5) to be set 00:21:05.814 [2024-07-24 19:03:43.349739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f0c20 (9): Bad file descriptor 00:21:05.814 [2024-07-24 19:03:43.349796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:05.814 [2024-07-24 19:03:43.349814] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:05.814 [2024-07-24 19:03:43.349828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:05.814 [2024-07-24 19:03:43.349861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:05.814 [2024-07-24 19:03:43.359528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:05.814 [2024-07-24 19:03:43.359751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:05.814 [2024-07-24 19:03:43.359782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c20 with addr=10.0.0.2, port=4420 00:21:05.814 [2024-07-24 19:03:43.359800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0c20 is same with the state(5) to be set 00:21:05.814 [2024-07-24 19:03:43.359825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f0c20 (9): Bad file descriptor 00:21:05.814 [2024-07-24 19:03:43.359860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:05.814 [2024-07-24 19:03:43.359879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:05.814 [2024-07-24 19:03:43.359893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:05.814 [2024-07-24 19:03:43.359915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
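All of these helpers go through rpc_cmd, which fronts SPDK's scripts/rpc.py; the -s flag selects which Unix-domain RPC socket to talk to, here /tmp/host.sock for the host-side bdev app rather than the target's default socket. Behaviorally it is close to the stand-in below, though the real autotest_common.sh version is more involved (it keeps a persistent rpc.py session), so treat this as an approximation:

    rpc_cmd() {
        # simplified stand-in: forward all arguments to rpc.py
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"
    }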
00:21:05.814 [2024-07-24 19:03:43.361859] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:05.814 [2024-07-24 19:03:43.361890] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:05.814 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:21:05.814 19:03:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:07.192 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 
-- # (( max-- )) 00:21:07.193 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:21:07.193 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:21:07.193 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:07.193 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:07.193 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.193 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:07.193 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:07.193 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:07.193 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.193 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:21:07.193 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:07.193 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:21:07.193 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:21:07.193 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:07.193 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:07.193 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:07.193 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:07.193 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:07.193 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:07.193 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:07.193 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:07.193 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.193 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:07.193 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.193 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:21:07.193 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:21:07.193 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:07.193 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:07.193 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:07.193 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.193 19:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:08.127 [2024-07-24 19:03:45.656317] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:08.127 [2024-07-24 19:03:45.656353] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:08.127 [2024-07-24 19:03:45.656374] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:08.384 [2024-07-24 19:03:45.742694] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:08.384 [2024-07-24 19:03:45.810843] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:08.384 [2024-07-24 19:03:45.810886] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:08.384 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.384 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:08.385 request: 00:21:08.385 { 00:21:08.385 "name": "nvme", 00:21:08.385 "trtype": "tcp", 00:21:08.385 "traddr": "10.0.0.2", 00:21:08.385 "adrfam": "ipv4", 00:21:08.385 "trsvcid": "8009", 00:21:08.385 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:08.385 "wait_for_attach": true, 00:21:08.385 "method": "bdev_nvme_start_discovery", 00:21:08.385 "req_id": 1 00:21:08.385 } 00:21:08.385 Got JSON-RPC error response 00:21:08.385 response: 00:21:08.385 { 00:21:08.385 "code": -17, 00:21:08.385 "message": "File exists" 00:21:08.385 } 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:08.385 request: 00:21:08.385 { 00:21:08.385 "name": "nvme_second", 00:21:08.385 "trtype": "tcp", 00:21:08.385 "traddr": "10.0.0.2", 00:21:08.385 "adrfam": "ipv4", 00:21:08.385 "trsvcid": "8009", 00:21:08.385 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:08.385 "wait_for_attach": true, 00:21:08.385 "method": "bdev_nvme_start_discovery", 00:21:08.385 "req_id": 1 00:21:08.385 } 00:21:08.385 Got JSON-RPC error response 00:21:08.385 response: 00:21:08.385 { 00:21:08.385 "code": -17, 00:21:08.385 "message": "File exists" 00:21:08.385 } 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:21:08.385 19:03:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:08.385 19:03:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.643 19:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:08.643 19:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:08.643 19:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:21:08.643 19:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:08.643 19:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:08.643 19:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:08.643 19:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:08.643 19:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:08.643 19:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:08.643 19:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.643 19:03:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:09.575 [2024-07-24 19:03:47.011151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:09.575 [2024-07-24 19:03:47.011196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b70 with addr=10.0.0.2, port=8010 00:21:09.575 [2024-07-24 19:03:47.011225] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:09.575 [2024-07-24 19:03:47.011245] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:09.575 [2024-07-24 19:03:47.011259] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:10.507 [2024-07-24 19:03:48.013601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:10.507 [2024-07-24 19:03:48.013671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3b70 with addr=10.0.0.2, port=8010 00:21:10.507 [2024-07-24 19:03:48.013706] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:10.507 [2024-07-24 19:03:48.013723] nvme.c: 830:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:21:10.507 [2024-07-24 19:03:48.013737] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:11.440 [2024-07-24 19:03:49.015781] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:11.440 request: 00:21:11.440 { 00:21:11.440 "name": "nvme_second", 00:21:11.440 "trtype": "tcp", 00:21:11.440 "traddr": "10.0.0.2", 00:21:11.440 "adrfam": "ipv4", 00:21:11.440 "trsvcid": "8010", 00:21:11.440 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:11.440 "wait_for_attach": false, 00:21:11.440 "attach_timeout_ms": 3000, 00:21:11.440 "method": "bdev_nvme_start_discovery", 00:21:11.440 "req_id": 1 00:21:11.440 } 00:21:11.440 Got JSON-RPC error response 00:21:11.440 response: 00:21:11.440 { 00:21:11.440 "code": -110, 00:21:11.440 "message": "Connection timed out" 00:21:11.440 } 00:21:11.440 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:11.440 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:21:11.440 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:11.440 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:11.440 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:11.440 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:21:11.440 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:11.440 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:11.440 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.440 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.440 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:11.440 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:11.440 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.698 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:21:11.698 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:21:11.698 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3215414 00:21:11.698 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:21:11.698 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:11.698 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:21:11.698 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:11.698 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:21:11.698 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:11.698 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:11.698 rmmod nvme_tcp 00:21:11.698 rmmod nvme_fabrics 00:21:11.698 rmmod nvme_keyring 00:21:11.698 19:03:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:11.698 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:21:11.698 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:21:11.698 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3215270 ']' 00:21:11.698 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3215270 00:21:11.698 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 3215270 ']' 00:21:11.698 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 3215270 00:21:11.698 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:21:11.698 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:11.698 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3215270 00:21:11.698 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:11.698 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:11.698 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3215270' 00:21:11.698 killing process with pid 3215270 00:21:11.698 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 3215270 00:21:11.698 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 3215270 00:21:11.956 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:11.956 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:11.956 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:11.956 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:11.956 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:11.956 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.956 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:11.956 19:03:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.486 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:14.486 00:21:14.486 real 0m14.068s 00:21:14.486 user 0m20.868s 00:21:14.486 sys 0m2.870s 00:21:14.486 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:14.486 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:14.486 ************************************ 00:21:14.487 END TEST nvmf_host_discovery 00:21:14.487 ************************************ 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
00:21:14.487 19:03:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.487 ************************************ 00:21:14.487 START TEST nvmf_host_multipath_status 00:21:14.487 ************************************ 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:14.487 * Looking for test storage... 00:21:14.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:21:14.487 19:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:16.388 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:16.388 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:21:16.388 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:16.388 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:16.388 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:16.388 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:16.388 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:21:16.388 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:21:16.388 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:16.388 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:21:16.388 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:21:16.388 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:21:16.388 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:21:16.388 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:21:16.388 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:21:16.388 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:16.388 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:16.388 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:16.388 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:16.388 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:16.388 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:16.388 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:16.388 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:16.388 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:16.388 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:16.388 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:16.388 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:16.388 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:16.388 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:16.388 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:16.388 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:16.388 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:16.389 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:16.389 
19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:16.389 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:16.389 Found net devices under 0000:09:00.0: cvl_0_0 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:16.389 19:03:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:16.389 Found net devices under 0000:09:00.1: cvl_0_1 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables 
-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:16.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:16.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:21:16.389 00:21:16.389 --- 10.0.0.2 ping statistics --- 00:21:16.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.389 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:16.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:16.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:21:16.389 00:21:16.389 --- 10.0.0.1 ping statistics --- 00:21:16.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.389 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:21:16.389 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:16.390 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:16.390 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:16.390 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:16.390 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:16.390 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:16.390 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:16.390 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:21:16.390 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:16.390 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:16.390 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:16.390 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3218575 00:21:16.390 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:16.390 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3218575 00:21:16.390 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3218575 ']' 00:21:16.390 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.390 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:16.390 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
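The trace above is the stock nvmf/common.sh bring-up: the PCI scan matches both functions of the E810 NIC (0x8086:0x159b, driver ice) to their net devices cvl_0_0 and cvl_0_1, and nvmf_tcp_init then splits them across a network namespace so target and initiator traffic crosses the physical link. Condensed into plain commands (names and addresses exactly as logged; the flush steps omitted), the plumbing amounts to:

    ip netns add cvl_0_0_ns_spdk                         # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # one port moves in...
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # ...the other stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns

Both pings come back in well under a millisecond, so the link is confirmed in both directions before nvmf_tgt (-i 0 -e 0xFFFF -m 0x3, pid 3218575) is started inside the namespace and waitforlisten blocks on /var/tmp/spdk.sock.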
00:21:16.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.390 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:16.390 19:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:16.390 [2024-07-24 19:03:53.681644] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:21:16.390 [2024-07-24 19:03:53.681735] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.390 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.390 [2024-07-24 19:03:53.749196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:16.390 [2024-07-24 19:03:53.865859] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.390 [2024-07-24 19:03:53.865922] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:16.390 [2024-07-24 19:03:53.865939] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.390 [2024-07-24 19:03:53.865952] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.390 [2024-07-24 19:03:53.865963] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:16.390 [2024-07-24 19:03:53.866046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.390 [2024-07-24 19:03:53.866053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.323 19:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:17.323 19:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:21:17.323 19:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:17.323 19:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:17.323 19:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:17.323 19:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:17.323 19:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3218575 00:21:17.323 19:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:17.581 [2024-07-24 19:03:54.958866] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:17.581 19:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:17.857 Malloc0 00:21:17.858 19:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:18.121 19:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:18.378 19:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:18.635 [2024-07-24 19:03:56.138741] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.635 19:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:18.893 [2024-07-24 19:03:56.395550] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:18.893 19:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3218879 00:21:18.893 19:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:18.893 19:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:18.893 19:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3218879 /var/tmp/bdevperf.sock 00:21:18.893 19:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3218879 ']' 00:21:18.893 19:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:18.893 19:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:18.893 19:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:18.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
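With connectivity verified, the target is assembled entirely over JSON-RPC and bdevperf is launched as the initiator-side I/O generator. The calls, with arguments as they appear in the trace (rpc.py and bdevperf paths shortened):

    rpc.py nvmf_create_transport -t tcp -o -u 8192       # TCP transport; -u 8192 sets in-capsule data size
    rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB RAM-backed bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
           -a -s SPDK00000000000001 -r -m 2              # -r enables ANA reporting, the feature under test
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90

The two listeners on ports 4420 and 4421 become the two paths whose ANA states are toggled below; bdevperf runs with -z, so it idles until perform_tests is issued over its own RPC socket.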
00:21:18.893 19:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:18.893 19:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:19.151 19:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:19.151 19:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:21:19.151 19:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:19.409 19:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:21:19.974 Nvme0n1 00:21:19.974 19:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:20.231 Nvme0n1 00:21:20.231 19:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:21:20.231 19:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:22.758 19:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:21:22.758 19:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:21:22.758 19:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:23.015 19:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:21:23.948 19:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:21:23.948 19:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:23.948 19:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:23.948 19:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:24.206 19:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:24.206 19:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:24.206 19:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:24.206 19:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:24.463 19:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:24.463 19:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:24.463 19:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:24.463 19:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:24.721 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:24.721 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:24.721 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:24.721 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:24.978 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:24.978 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:24.978 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:24.978 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:25.236 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:25.236 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:25.236 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:25.236 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:25.492 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:25.492 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:21:25.492 19:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:25.750 19:04:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:26.007 19:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:21:26.939 19:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:21:26.939 19:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:26.939 19:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:26.939 19:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:27.197 19:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:27.197 19:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:27.197 19:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:27.197 19:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:27.455 19:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:27.455 19:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:27.455 19:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:27.455 19:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:27.713 19:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:27.713 19:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:27.713 19:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:27.713 19:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:27.971 19:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:27.971 19:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:27.971 19:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:27.971 19:04:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:28.229 19:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:28.229 19:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:28.229 19:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:28.229 19:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:28.486 19:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:28.486 19:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:21:28.486 19:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:28.744 19:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:21:29.002 19:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:21:29.937 19:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:21:29.937 19:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:29.937 19:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:29.937 19:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:30.195 19:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:30.195 19:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:30.195 19:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:30.195 19:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:30.452 19:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:30.453 19:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:30.453 19:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:30.453 19:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:30.710 19:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:30.710 19:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:30.711 19:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:30.711 19:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:30.968 19:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:30.968 19:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:30.968 19:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:30.968 19:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:31.226 19:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:31.226 19:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:31.226 19:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:31.226 19:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:31.484 19:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:31.484 19:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:21:31.484 19:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:31.742 19:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:32.000 19:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:21:32.933 19:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:21:32.933 19:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:32.933 19:04:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:32.934 19:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:33.191 19:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:33.191 19:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:33.191 19:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:33.191 19:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:33.449 19:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:33.449 19:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:33.449 19:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:33.449 19:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:33.707 19:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:33.707 19:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:33.707 19:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:33.707 19:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:33.965 19:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:33.965 19:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:33.965 19:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:33.965 19:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:34.223 19:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:34.223 19:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:34.223 19:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:34.223 19:04:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:34.481 19:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:34.481 19:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:21:34.482 19:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:34.740 19:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:34.998 19:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:21:35.988 19:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:21:35.988 19:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:35.988 19:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:35.988 19:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:36.245 19:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:36.245 19:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:36.245 19:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:36.245 19:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:36.503 19:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:36.503 19:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:36.503 19:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:36.503 19:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:36.761 19:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:36.761 19:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:36.761 19:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:36.761 19:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:37.019 19:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:37.019 19:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:37.019 19:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:37.019 19:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:37.276 19:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:37.276 19:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:37.276 19:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:37.276 19:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:37.534 19:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:37.534 19:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:21:37.534 19:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:37.791 19:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:38.049 19:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:21:38.982 19:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:21:38.982 19:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:38.982 19:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:38.982 19:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:39.240 19:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:39.240 19:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:39.240 19:04:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:39.240 19:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:39.498 19:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:39.498 19:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:39.498 19:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:39.498 19:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:39.756 19:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:39.756 19:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:39.756 19:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:39.756 19:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:40.013 19:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:40.013 19:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:40.013 19:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:40.013 19:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:40.272 19:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:40.272 19:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:40.272 19:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:40.272 19:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:40.531 19:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:40.531 19:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:21:40.789 19:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:21:40.789 19:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:21:41.047 19:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:41.304 19:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:21:42.237 19:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:21:42.237 19:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:42.237 19:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:42.237 19:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:42.494 19:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:42.494 19:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:42.494 19:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:42.494 19:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:42.752 19:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:42.752 19:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:42.752 19:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:42.752 19:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:43.009 19:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:43.009 19:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:43.009 19:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:43.009 19:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:43.266 19:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:43.266 19:04:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:43.266 19:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:43.266 19:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:43.524 19:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:43.524 19:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:43.524 19:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:43.524 19:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:43.782 19:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:43.782 19:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:21:43.782 19:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:44.040 19:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:44.297 19:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:21:45.230 19:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:21:45.230 19:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:45.230 19:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:45.230 19:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:45.487 19:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:45.487 19:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:45.487 19:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:45.487 19:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:45.745 19:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:45.745 19:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:45.745 19:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:45.745 19:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:46.002 19:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:46.002 19:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:46.002 19:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:46.002 19:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:46.258 19:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:46.259 19:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:46.259 19:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:46.259 19:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:46.517 19:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:46.517 19:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:46.517 19:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:46.517 19:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:46.775 19:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:46.775 19:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:21:46.775 19:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:47.034 19:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:21:47.292 19:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
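Every round above follows the same three-step pattern: set_ANA_state re-labels the 4420 and 4421 listeners through nvmf_subsystem_listener_set_ana_state, the test sleeps one second so the initiator can digest the resulting ANA state change, and check_status then queries bdevperf's view of each path. A minimal paraphrase of the two helpers as they behave in this trace (the originals live in host/multipath_status.sh):

    # port_status <trsvcid> <field> <expected> -- compare one attribute of one path
    port_status() {
        local got
        got=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
              | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ "$got" == "$3" ]]
    }

    # check_status <cur4420> <cur4421> <conn4420> <conn4421> <acc4420> <acc4421>
    check_status() {
        port_status 4420 current    "$1" && port_status 4421 current    "$2" &&
        port_status 4420 connected  "$3" && port_status 4421 connected  "$4" &&
        port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }

Under the default multipath policy only one path is current at a time, so the early rounds expect current == true on exactly one port; after bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active (multipath_status.sh@116 above), both optimized paths report current == true, which is what the subsequent check_status true true ... rounds verify.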
00:21:48.227 19:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:21:48.227 19:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:48.227 19:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:48.227 19:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:48.486 19:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:48.486 19:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:48.486 19:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:48.486 19:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:48.744 19:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:48.744 19:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:48.744 19:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:48.745 19:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:49.003 19:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:49.003 19:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:49.003 19:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:49.003 19:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:49.260 19:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:49.260 19:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:49.260 19:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:49.260 19:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:49.518 19:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:49.518 19:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
00:21:49.518 19:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:21:49.518 19:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:49.518 19:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:21:49.780 19:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:49.780 19:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:21:49.780 19:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:21:50.040 19:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:21:50.297 19:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:21:51.231 19:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:21:51.231 19:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:21:51.231 19:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:51.231 19:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:21:51.489 19:04:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:51.489 19:04:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:21:51.489 19:04:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:51.489 19:04:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:21:51.748 19:04:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:21:51.748 19:04:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:21:51.748 19:04:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:51.748 19:04:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
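[Editor's note] The set_ANA_state calls traced at @129/@133 drive the scenario from the target side: one nvmf_subsystem_listener_set_ana_state RPC per listener (script lines @59/@60), sent to the target's default RPC socket rather than bdevperf's. A sketch under the same assumptions as the earlier snippet, using the subsystem NQN and address this run uses; the first argument is the ANA state for the 4420 listener, the second for 4421:

    set_ANA_state() {
        # @59: listener on port 4420.
        "$rootdir/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        # @60: listener on port 4421.
        "$rootdir/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

The sleep 1 at @130/@134 gives the host side time to observe the ANA change before check_status re-asserts the per-path flags: after "non_optimized inaccessible", the 4421 path should stop being current and accessible while staying connected, which is the "true false true true true false" pattern being checked above.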
00:21:52.005 19:04:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:52.005 19:04:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:21:52.005 19:04:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:52.005 19:04:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:21:52.263 19:04:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:52.263 19:04:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:21:52.263 19:04:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:52.263 19:04:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:21:52.522 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:52.522 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:21:52.522 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:52.522 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:21:52.809 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:21:52.809 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3218879
00:21:52.809 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3218879 ']'
00:21:52.809 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3218879
00:21:52.809 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:21:52.809 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:52.809 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3218879
00:21:52.809 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:21:52.809 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:21:52.809 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3218879'
killing process with pid 3218879
00:21:52.809 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3218879
00:21:52.809 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3218879
00:21:53.079 Connection closed with partial response:
00:21:53.079 
00:21:53.079 
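[Editor's note] killprocess at @137 comes from test/common/autotest_common.sh; the trace lines @950-@974 show its guards firing in order. A reconstruction of the control flow visible here (the helper in the tree handles more cases than this sketch):

    killprocess() {
        local pid=$1 process_name

        # @950: reject an empty pid; @954: verify the process still exists.
        [[ -n $pid ]]
        kill -0 "$pid"

        # @955/@956: on Linux, read the command name (reactor_2 in this run,
        # i.e. an SPDK reactor thread of the bdevperf process).
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi

        # @960: refuse to kill sudo itself.
        [[ $process_name != sudo ]]

        # @968/@969/@974: announce, signal, and reap the process.
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }

The "Connection closed with partial response" lines just above are bdevperf reacting to the controller going away mid-I/O once the signal lands; the script-level wait at @139 below then collects the exit status before the captured bdevperf output (try.txt) is dumped into the log.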
00:21:53.079 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3218879
00:21:53.079 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:21:53.079 [2024-07-24 19:03:56.448419] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization...
00:21:53.079 [2024-07-24 19:03:56.448500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3218879 ]
00:21:53.079 EAL: No free 2048 kB hugepages reported on node 1
00:21:53.080 [2024-07-24 19:03:56.507309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:53.080 [2024-07-24 19:03:56.616529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:21:53.080 Running I/O for 90 seconds...
00:21:53.080 [2024-07-24 19:04:12.222948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:74904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:53.080 [2024-07-24 19:04:12.223008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:21:53.080 [2024-07-24 19:04:12.223070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:53.080 [2024-07-24 19:04:12.223092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:21:53.080 [2024-07-24 19:04:12.223125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:53.080 [2024-07-24 19:04:12.223144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:21:53.080 [2024-07-24 19:04:12.223167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:74928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:53.080 [2024-07-24 19:04:12.223183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:21:53.080 [2024-07-24 19:04:12.223206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:74936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:53.080 [2024-07-24 19:04:12.223222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:21:53.080 [2024-07-24 19:04:12.223244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:53.080 [2024-07-24 19:04:12.223260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:21:53.080 [2024-07-24 19:04:12.223283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:53.080 [2024-07-24 19:04:12.223299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001b p:0 m:0 dnr:0
[2024-07-24 19:04:12.223321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.080 [2024-07-24 19:04:12.223338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:53.080 [2024-07-24 19:04:12.223360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.080 [2024-07-24 19:04:12.223377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:53.080 [2024-07-24 19:04:12.223399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.080 [2024-07-24 19:04:12.223415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:53.080 [2024-07-24 19:04:12.223438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:74984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.080 [2024-07-24 19:04:12.223464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:53.080 [2024-07-24 19:04:12.223488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.080 [2024-07-24 19:04:12.223505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:53.080 [2024-07-24 19:04:12.223527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:75000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.080 [2024-07-24 19:04:12.223544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:53.080 [2024-07-24 19:04:12.223566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:75008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.080 [2024-07-24 19:04:12.223582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:53.080 [2024-07-24 19:04:12.223604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:75016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.080 [2024-07-24 19:04:12.223620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:53.080 [2024-07-24 19:04:12.223642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:75024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.080 [2024-07-24 19:04:12.223658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:53.080 [2024-07-24 19:04:12.223681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:75032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.080 [2024-07-24 19:04:12.223697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:125 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:53.080 [2024-07-24 19:04:12.224232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:75040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.080 [2024-07-24 19:04:12.224256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:53.080 [2024-07-24 19:04:12.224285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.080 [2024-07-24 19:04:12.224303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:53.080 [2024-07-24 19:04:12.224327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.080 [2024-07-24 19:04:12.224344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:53.080 [2024-07-24 19:04:12.224367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.080 [2024-07-24 19:04:12.224383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:53.080 [2024-07-24 19:04:12.224406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:75072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.080 [2024-07-24 19:04:12.224423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:53.080 [2024-07-24 19:04:12.224447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.080 [2024-07-24 19:04:12.224463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:53.080 [2024-07-24 19:04:12.224492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:75088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.080 [2024-07-24 19:04:12.224509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:53.080 [2024-07-24 19:04:12.224532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:75096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.080 [2024-07-24 19:04:12.224548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:53.080 [2024-07-24 19:04:12.224572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:75104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.080 [2024-07-24 19:04:12.224588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:53.080 [2024-07-24 19:04:12.224611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:75112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.080 [2024-07-24 19:04:12.224627] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:53.080 [2024-07-24 19:04:12.224650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:75120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.080 [2024-07-24 19:04:12.224681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:53.080 [2024-07-24 19:04:12.224703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.080 [2024-07-24 19:04:12.224719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:53.080 [2024-07-24 19:04:12.224741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:75136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.080 [2024-07-24 19:04:12.224756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:53.080 [2024-07-24 19:04:12.224779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:75144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.080 [2024-07-24 19:04:12.224794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:53.080 [2024-07-24 19:04:12.224817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:75152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.080 [2024-07-24 19:04:12.224832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:53.080 [2024-07-24 19:04:12.224854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.080 [2024-07-24 19:04:12.224870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:53.080 [2024-07-24 19:04:12.224893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:75168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.080 [2024-07-24 19:04:12.224909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:53.080 [2024-07-24 19:04:12.224982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.080 [2024-07-24 19:04:12.225002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:53.080 [2024-07-24 19:04:12.225034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.080 [2024-07-24 19:04:12.225051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:53.080 [2024-07-24 19:04:12.225076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.080 [2024-07-24 
19:04:12.225114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.225142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:75200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.225160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.225184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:75208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.225201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.225225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.225242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.225266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.225283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.225307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:75232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.225323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.225348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:75240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.225364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.225405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:75248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.225422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.225446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:75256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.225462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.225485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:75264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.225501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.225525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:75272 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.225540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.225563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.225583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.225608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:75288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.225624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.225648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.225664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.225687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.225703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.225727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:75312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.225743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.225783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:75320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.225799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.225824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:75328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.225841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.225865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:75336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.225882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.225907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:75344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.225924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.225949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:72 nsid:1 lba:75352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.225965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.225989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:75360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.226006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.226032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:75368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.226049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.226074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:75376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.226094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.226127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.226146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.226170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.226187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.226212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:75400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.226228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.226253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.226270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.226295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.226311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.226335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.226351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.226376] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:75432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.226392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.226417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.226433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.226457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:75448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.226474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.226498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.226515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.226540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:75464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.226556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.226595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.226612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.226641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:75480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.226658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:53.081 [2024-07-24 19:04:12.226682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:75488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.081 [2024-07-24 19:04:12.226697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.226722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:75496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.226738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.226761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.226778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 
00:21:53.082 [2024-07-24 19:04:12.226801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:75512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.226817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.226841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:75520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.226857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.226881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:75528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.226897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.226921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.226938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.226962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:75544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.226978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.227216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:75552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.227239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.227273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:75560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.227292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.227321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.227337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.227371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.227388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.227418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.227434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:32 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.227463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.227480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.227525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:75600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.227542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.227570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:75608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.227586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.227613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.227629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.227657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.227674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.227702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:75632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.227718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.227746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.227762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.227790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.227806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.227833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.227850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.227877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.227894] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.227921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.227941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.227969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.227986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.228013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.228030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.228057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.228073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.228126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.228146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.228174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.228192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.228220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.228236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.228265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.228282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.228310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.228327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.228355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:53.082 [2024-07-24 19:04:12.228372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.228401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.228434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.228463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.228479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.228506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.228527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.228555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.228572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.228599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.228615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:53.082 [2024-07-24 19:04:12.228643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.082 [2024-07-24 19:04:12.228659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:53.083 [2024-07-24 19:04:12.228686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.083 [2024-07-24 19:04:12.228702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:53.083 [2024-07-24 19:04:12.228730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.083 [2024-07-24 19:04:12.228746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:53.083 [2024-07-24 19:04:12.228774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.083 [2024-07-24 19:04:12.228790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:53.083 [2024-07-24 19:04:12.228817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 
lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.083 [2024-07-24 19:04:12.228834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:53.083 [2024-07-24 19:04:12.228861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.083 [2024-07-24 19:04:12.228878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:53.083 [2024-07-24 19:04:12.228905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.083 [2024-07-24 19:04:12.228921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:53.083 [2024-07-24 19:04:12.228949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.083 [2024-07-24 19:04:12.228965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:53.083 [2024-07-24 19:04:12.228993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.083 [2024-07-24 19:04:12.229010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:53.083 [2024-07-24 19:04:12.229037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.083 [2024-07-24 19:04:12.229053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:53.083 [2024-07-24 19:04:12.229100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.083 [2024-07-24 19:04:12.229128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:53.083 [2024-07-24 19:04:12.229162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.083 [2024-07-24 19:04:12.229180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:53.083 [2024-07-24 19:04:12.229208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.083 [2024-07-24 19:04:12.229225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:53.083 [2024-07-24 19:04:12.229253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.083 [2024-07-24 19:04:12.229270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:53.083 [2024-07-24 19:04:12.229298] nvme_qpair.c: 
00:21:53.083 [2024-07-24 19:04:12.229315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:21:53.083 [2024-07-24 19:04:27.802678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:67832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:53.083 [2024-07-24 19:04:27.802745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
[... hundreds of similar nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs omitted: READ and WRITE commands on qid:1 (lba roughly 67616-68984, plus a few near 75904), every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 19:04:12.229 through 19:04:27.818 ...]
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:53.089 [2024-07-24 19:04:27.818326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:67928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.089 [2024-07-24 19:04:27.818342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:53.089 [2024-07-24 19:04:27.818363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:68056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.089 [2024-07-24 19:04:27.818379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:53.089 [2024-07-24 19:04:27.818401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:68184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.089 [2024-07-24 19:04:27.818417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:53.089 [2024-07-24 19:04:27.819456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:68752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.089 [2024-07-24 19:04:27.819481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:53.089 [2024-07-24 19:04:27.819509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:68784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.089 [2024-07-24 19:04:27.819526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:53.089 [2024-07-24 19:04:27.819549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:68816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.089 [2024-07-24 19:04:27.819565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:53.089 [2024-07-24 19:04:27.819586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:68848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.089 [2024-07-24 19:04:27.819602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:53.089 [2024-07-24 19:04:27.819624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:68880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.089 [2024-07-24 19:04:27.819640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:53.089 [2024-07-24 19:04:27.819661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:68720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.089 [2024-07-24 19:04:27.819677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:53.089 [2024-07-24 19:04:27.819704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:68392 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:53.089 [2024-07-24 19:04:27.819720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:53.089 [2024-07-24 19:04:27.819742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:68504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.089 [2024-07-24 19:04:27.819758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:53.089 [2024-07-24 19:04:27.819779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.089 [2024-07-24 19:04:27.819795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:53.089 [2024-07-24 19:04:27.819816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:68656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.089 [2024-07-24 19:04:27.819833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:53.089 [2024-07-24 19:04:27.819854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:68624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.089 [2024-07-24 19:04:27.819870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:53.089 [2024-07-24 19:04:27.819891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:68312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.089 [2024-07-24 19:04:27.819907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:53.089 [2024-07-24 19:04:27.819929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:67624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.089 [2024-07-24 19:04:27.819945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:53.089 [2024-07-24 19:04:27.819966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:68456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.089 [2024-07-24 19:04:27.819982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:53.089 [2024-07-24 19:04:27.820003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:68584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.089 [2024-07-24 19:04:27.820019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:53.089 [2024-07-24 19:04:27.820041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:67776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.089 [2024-07-24 19:04:27.820057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:53.089 [2024-07-24 19:04:27.820078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:51 nsid:1 lba:67896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.089 [2024-07-24 19:04:27.820094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:53.089 [2024-07-24 19:04:27.820126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:68024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.089 [2024-07-24 19:04:27.820143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:53.089 [2024-07-24 19:04:27.820165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.089 [2024-07-24 19:04:27.820185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:53.089 [2024-07-24 19:04:27.820207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:69008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.089 [2024-07-24 19:04:27.820223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:53.089 [2024-07-24 19:04:27.820245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:69024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.089 [2024-07-24 19:04:27.820261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:53.089 [2024-07-24 19:04:27.820283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:69040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.089 [2024-07-24 19:04:27.820298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:53.089 [2024-07-24 19:04:27.820319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:69056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.089 [2024-07-24 19:04:27.820335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:53.089 [2024-07-24 19:04:27.820357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:68280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.089 [2024-07-24 19:04:27.820373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:53.089 [2024-07-24 19:04:27.820394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:68920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.089 [2024-07-24 19:04:27.820410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:53.089 [2024-07-24 19:04:27.820431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:68952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.090 [2024-07-24 19:04:27.820447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.820484] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:68984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.090 [2024-07-24 19:04:27.820500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.820522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:68072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.090 [2024-07-24 19:04:27.820537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.821816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:68744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.090 [2024-07-24 19:04:27.821841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.821883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:68808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.090 [2024-07-24 19:04:27.821901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.821923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:68872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.090 [2024-07-24 19:04:27.821944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.821967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:67888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.090 [2024-07-24 19:04:27.821983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.822005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:68144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.090 [2024-07-24 19:04:27.822021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.822058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:68400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.090 [2024-07-24 19:04:27.822074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.822117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:68056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.090 [2024-07-24 19:04:27.822136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.822159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:68640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.090 [2024-07-24 19:04:27.822175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005b p:0 m:0 dnr:0 
00:21:53.090 [2024-07-24 19:04:27.822197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:68696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.090 [2024-07-24 19:04:27.822213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.822234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.090 [2024-07-24 19:04:27.822250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.822272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:69064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.090 [2024-07-24 19:04:27.822288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.822309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:69080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.090 [2024-07-24 19:04:27.822325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.822347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:69096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.090 [2024-07-24 19:04:27.822362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.822384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:69112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.090 [2024-07-24 19:04:27.822400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.822421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:69128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.090 [2024-07-24 19:04:27.822437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.822463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:69144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.090 [2024-07-24 19:04:27.822480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.822502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:68552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.090 [2024-07-24 19:04:27.822518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.822539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:68784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.090 [2024-07-24 19:04:27.822555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:31 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.822577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:68848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.090 [2024-07-24 19:04:27.822593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.822629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:68720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.090 [2024-07-24 19:04:27.822644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.822665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:68504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.090 [2024-07-24 19:04:27.822680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.822702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:68656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.090 [2024-07-24 19:04:27.822717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.823438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:68312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.090 [2024-07-24 19:04:27.823463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.823505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:68456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.090 [2024-07-24 19:04:27.823523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.823545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:67776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.090 [2024-07-24 19:04:27.823576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.823597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:68024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.090 [2024-07-24 19:04:27.823612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.823632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:69008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.090 [2024-07-24 19:04:27.823647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.823673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:69040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.090 [2024-07-24 19:04:27.823688] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.823708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:68280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.090 [2024-07-24 19:04:27.823723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.823743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:68952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.090 [2024-07-24 19:04:27.823758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.823779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:68072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.090 [2024-07-24 19:04:27.823794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.825657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:69160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.090 [2024-07-24 19:04:27.825697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.825725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:69176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.090 [2024-07-24 19:04:27.825743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:53.090 [2024-07-24 19:04:27.825765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:69192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.091 [2024-07-24 19:04:27.825781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:53.091 [2024-07-24 19:04:27.825802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:69208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.091 [2024-07-24 19:04:27.825818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:53.091 [2024-07-24 19:04:27.825840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:69224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.091 [2024-07-24 19:04:27.825856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:53.091 [2024-07-24 19:04:27.825877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:69240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.091 [2024-07-24 19:04:27.825893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:53.091 [2024-07-24 19:04:27.825915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:68896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:53.091 [2024-07-24 19:04:27.825931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:53.091 [2024-07-24 19:04:27.825953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:68928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.091 [2024-07-24 19:04:27.825968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:53.091 [2024-07-24 19:04:27.825990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:68960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.091 [2024-07-24 19:04:27.826010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:53.091 [2024-07-24 19:04:27.826033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:68992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.091 [2024-07-24 19:04:27.826049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:53.091 [2024-07-24 19:04:27.826071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:68792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.091 [2024-07-24 19:04:27.826087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:53.091 [2024-07-24 19:04:27.826115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:68856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.091 [2024-07-24 19:04:27.826133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:53.091 [2024-07-24 19:04:27.826154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:68808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.091 [2024-07-24 19:04:27.826170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:53.091 [2024-07-24 19:04:27.826191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:67888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.091 [2024-07-24 19:04:27.826207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:53.091 [2024-07-24 19:04:27.826229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:68400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.091 [2024-07-24 19:04:27.826244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:53.091 [2024-07-24 19:04:27.826266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:68640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.091 [2024-07-24 19:04:27.826281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:53.091 [2024-07-24 19:04:27.826302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:31 nsid:1 lba:68344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.091 [2024-07-24 19:04:27.826318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:53.091 [2024-07-24 19:04:27.826339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:69080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.091 [2024-07-24 19:04:27.826355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:53.091 [2024-07-24 19:04:27.826377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:69112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.091 [2024-07-24 19:04:27.826392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:53.091 [2024-07-24 19:04:27.826433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:69144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.091 [2024-07-24 19:04:27.826448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:53.091 [2024-07-24 19:04:27.826483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:68784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.091 [2024-07-24 19:04:27.826502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:53.091 [2024-07-24 19:04:27.826524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:68720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.091 [2024-07-24 19:04:27.826539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:53.091 [2024-07-24 19:04:27.826559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:68656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.091 [2024-07-24 19:04:27.826574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:53.091 [2024-07-24 19:04:27.826594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:67992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.091 [2024-07-24 19:04:27.826608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:53.091 [2024-07-24 19:04:27.826629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.091 [2024-07-24 19:04:27.826643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:53.091 [2024-07-24 19:04:27.826663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:69272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.091 [2024-07-24 19:04:27.826678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:53.091 [2024-07-24 19:04:27.826698] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:68248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.091 [2024-07-24 19:04:27.826712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:53.091 [2024-07-24 19:04:27.826733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:68456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.091 [2024-07-24 19:04:27.826747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:53.091 [2024-07-24 19:04:27.826767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:68024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.091 [2024-07-24 19:04:27.826782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:53.091 [2024-07-24 19:04:27.826802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:69040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.091 [2024-07-24 19:04:27.826817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:53.091 [2024-07-24 19:04:27.826838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:68952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.091 [2024-07-24 19:04:27.826853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:53.091 [2024-07-24 19:04:27.827430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.091 [2024-07-24 19:04:27.827453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:53.091 [2024-07-24 19:04:27.827480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:68520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.091 [2024-07-24 19:04:27.827498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:53.091 [2024-07-24 19:04:27.827525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:69296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.091 [2024-07-24 19:04:27.827542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:53.091 [2024-07-24 19:04:27.827563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:69312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.091 [2024-07-24 19:04:27.827579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:53.091 [2024-07-24 19:04:27.827601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:69328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.091 [2024-07-24 19:04:27.827617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 
00:21:53.091 [2024-07-24 19:04:27.827638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:69344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.091 [2024-07-24 19:04:27.827654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:53.091 [2024-07-24 19:04:27.827675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:69360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.092 [2024-07-24 19:04:27.827706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:53.092 [2024-07-24 19:04:27.827728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:69376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.092 [2024-07-24 19:04:27.827744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:53.092 [2024-07-24 19:04:27.827780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:69392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.092 [2024-07-24 19:04:27.827795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:53.092 [2024-07-24 19:04:27.827815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:69408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.092 [2024-07-24 19:04:27.827831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:53.092 [2024-07-24 19:04:27.829279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:69000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.092 [2024-07-24 19:04:27.829304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:53.092 [2024-07-24 19:04:27.829331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:69032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.092 [2024-07-24 19:04:27.829349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:53.092 [2024-07-24 19:04:27.829372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:68904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.092 [2024-07-24 19:04:27.829388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:53.092 [2024-07-24 19:04:27.829409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:68968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.092 [2024-07-24 19:04:27.829425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:53.092 [2024-07-24 19:04:27.829452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:68840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.092 [2024-07-24 19:04:27.829469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:41 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:53.092 [2024-07-24 19:04:27.829491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:68184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.092 [2024-07-24 19:04:27.829507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:53.092 [2024-07-24 19:04:27.829529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:69432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.092 [2024-07-24 19:04:27.829545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:53.092 [2024-07-24 19:04:27.829566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:69448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.092 [2024-07-24 19:04:27.829582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:53.092 [2024-07-24 19:04:27.829603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:69464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.092 [2024-07-24 19:04:27.829619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:53.092 [2024-07-24 19:04:27.829640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:69480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.092 [2024-07-24 19:04:27.829656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:53.092 [2024-07-24 19:04:27.829677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:69176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.092 [2024-07-24 19:04:27.829693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:53.092 [2024-07-24 19:04:27.829714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:69208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.092 [2024-07-24 19:04:27.829730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:53.092 [2024-07-24 19:04:27.829751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:69240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.092 [2024-07-24 19:04:27.829767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:53.092 [2024-07-24 19:04:27.829788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:68928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.092 [2024-07-24 19:04:27.829803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:53.092 [2024-07-24 19:04:27.829825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:68992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.092 [2024-07-24 19:04:27.829841] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:53.092 [2024-07-24 19:04:27.829863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:68856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.092 [2024-07-24 19:04:27.829878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:53.092 [2024-07-24 19:04:27.829904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.092 [2024-07-24 19:04:27.829921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:53.092 [2024-07-24 19:04:27.829942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:68640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.092 [2024-07-24 19:04:27.829957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:53.092 [2024-07-24 19:04:27.829979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:69080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.092 [2024-07-24 19:04:27.829994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:53.092 [2024-07-24 19:04:27.830016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:69144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.092 [2024-07-24 19:04:27.830048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:53.092 [2024-07-24 19:04:27.830070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:68720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.092 [2024-07-24 19:04:27.830085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:53.092 [2024-07-24 19:04:27.830130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:67992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.092 [2024-07-24 19:04:27.830148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:53.092 [2024-07-24 19:04:27.830170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:69272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.092 [2024-07-24 19:04:27.830187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:53.092 [2024-07-24 19:04:27.830208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:68456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.092 [2024-07-24 19:04:27.830224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:53.092 [2024-07-24 19:04:27.830246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:69040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:53.092 [2024-07-24 19:04:27.830262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
[... repeated nvme_qpair.c NOTICE pairs elided: 243:nvme_io_qpair_print_command READ/WRITE dumps on sqid:1 (nsid:1, lba:67992-70288, len:8, SGL TRANSPORT DATA BLOCK / SGL DATA BLOCK OFFSET) each followed by a 474:spdk_nvme_print_completion entry failing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0; sqhd advances from 0034 through 0002, wrapping the 0x80-entry queue twice, over 19:04:27.830283-19:04:27.849702 ...]
00:21:53.098 [2024-07-24 19:04:27.849718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:21:53.098 [2024-07-24 19:04:27.849740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:70304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.098 [2024-07-24 19:04:27.849756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:53.098 [2024-07-24 19:04:27.849778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:70320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.098 [2024-07-24 19:04:27.849794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:53.098 [2024-07-24 19:04:27.849816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:70336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.098 [2024-07-24 19:04:27.849833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:53.098 [2024-07-24 19:04:27.849854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:70352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.098 [2024-07-24 19:04:27.849870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:53.098 [2024-07-24 19:04:27.849893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.098 [2024-07-24 19:04:27.849909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:53.098 [2024-07-24 19:04:27.850793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:69640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.098 [2024-07-24 19:04:27.850821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:53.098 [2024-07-24 19:04:27.850866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:69144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.098 [2024-07-24 19:04:27.850883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:53.098 [2024-07-24 19:04:27.850905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:70384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.098 [2024-07-24 19:04:27.850921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:53.098 [2024-07-24 19:04:27.850943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:70088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.098 [2024-07-24 19:04:27.850959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:53.098 [2024-07-24 19:04:27.850981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:70120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.098 [2024-07-24 19:04:27.850997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:45 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:53.098 [2024-07-24 19:04:27.851018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:70152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.098 [2024-07-24 19:04:27.851034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:53.098 [2024-07-24 19:04:27.851056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:70184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.098 [2024-07-24 19:04:27.851071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:53.098 [2024-07-24 19:04:27.851093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:69784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.098 [2024-07-24 19:04:27.851121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:53.098 [2024-07-24 19:04:27.851145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:69848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.098 [2024-07-24 19:04:27.851162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:53.098 [2024-07-24 19:04:27.851183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:69904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.099 [2024-07-24 19:04:27.851199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.851221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:69728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.099 [2024-07-24 19:04:27.851237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.851258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:69240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.099 [2024-07-24 19:04:27.851274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.851296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:70000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.099 [2024-07-24 19:04:27.851312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.851338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:69392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.099 [2024-07-24 19:04:27.851355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.851377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:70056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.099 [2024-07-24 19:04:27.851394] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.851416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:69696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.099 [2024-07-24 19:04:27.851432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.851807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.099 [2024-07-24 19:04:27.851831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.851857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:69984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.099 [2024-07-24 19:04:27.851875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.851897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.099 [2024-07-24 19:04:27.851913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.851935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:70416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.099 [2024-07-24 19:04:27.851951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.851973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:70432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.099 [2024-07-24 19:04:27.851988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.852010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.099 [2024-07-24 19:04:27.852026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.852064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:70464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.099 [2024-07-24 19:04:27.852079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.852127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.099 [2024-07-24 19:04:27.852146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.852169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:70496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:53.099 [2024-07-24 19:04:27.852186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.852215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:70512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.099 [2024-07-24 19:04:27.852232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.852253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:70032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.099 [2024-07-24 19:04:27.852269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.852290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:69792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.099 [2024-07-24 19:04:27.852306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.852328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.099 [2024-07-24 19:04:27.852343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.852365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:70240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.099 [2024-07-24 19:04:27.852381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.852402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:70272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.099 [2024-07-24 19:04:27.852431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.852453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.099 [2024-07-24 19:04:27.852467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.852506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:68456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.099 [2024-07-24 19:04:27.852522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.852543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:69808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.099 [2024-07-24 19:04:27.852559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.852580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 
lba:70304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.099 [2024-07-24 19:04:27.852596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.852617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:70336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.099 [2024-07-24 19:04:27.852633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.852655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:70368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.099 [2024-07-24 19:04:27.852671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.853162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:70520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.099 [2024-07-24 19:04:27.853190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.853219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.099 [2024-07-24 19:04:27.853237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.853259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:70552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.099 [2024-07-24 19:04:27.853275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.853296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:70064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.099 [2024-07-24 19:04:27.853312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.853334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.099 [2024-07-24 19:04:27.853350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.853372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.099 [2024-07-24 19:04:27.853387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.853423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:70160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.099 [2024-07-24 19:04:27.853439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.853460] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:70192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.099 [2024-07-24 19:04:27.853474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.853511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:69144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.099 [2024-07-24 19:04:27.853526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.853562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:70088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.099 [2024-07-24 19:04:27.853578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:53.099 [2024-07-24 19:04:27.853600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:70152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.099 [2024-07-24 19:04:27.853617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.853638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:69784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.100 [2024-07-24 19:04:27.853654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.853676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:69904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.100 [2024-07-24 19:04:27.853696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.853719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:69240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.100 [2024-07-24 19:04:27.853735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.853757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:69392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.100 [2024-07-24 19:04:27.853773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.853795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:69696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.100 [2024-07-24 19:04:27.853811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.855541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:70008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.100 [2024-07-24 19:04:27.855579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003d p:0 m:0 
dnr:0 00:21:53.100 [2024-07-24 19:04:27.855606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:69984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.100 [2024-07-24 19:04:27.855639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.855662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:70416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.100 [2024-07-24 19:04:27.855678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.855699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.100 [2024-07-24 19:04:27.855715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.855737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:70480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.100 [2024-07-24 19:04:27.855753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.855774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:70512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.100 [2024-07-24 19:04:27.855790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.855812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.100 [2024-07-24 19:04:27.855828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.855849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:70240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.100 [2024-07-24 19:04:27.855879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.855900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:69944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.100 [2024-07-24 19:04:27.855919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.855941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:69808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.100 [2024-07-24 19:04:27.855956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.855976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:70336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.100 [2024-07-24 19:04:27.855991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.856011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:69880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.100 [2024-07-24 19:04:27.856026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.856046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:69208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.100 [2024-07-24 19:04:27.856061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.856081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:70248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.100 [2024-07-24 19:04:27.856095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.856142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:70280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.100 [2024-07-24 19:04:27.856159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.856180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.100 [2024-07-24 19:04:27.856195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.856216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:70064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.100 [2024-07-24 19:04:27.856231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.856253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:70128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.100 [2024-07-24 19:04:27.856268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.856288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:70192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.100 [2024-07-24 19:04:27.856304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.856324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:70088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.100 [2024-07-24 19:04:27.856340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.856361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:69784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.100 [2024-07-24 19:04:27.856377] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.856402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:69240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.100 [2024-07-24 19:04:27.856432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.856454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:69696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.100 [2024-07-24 19:04:27.856469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.857905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:70312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.100 [2024-07-24 19:04:27.857931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.857973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:70568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.100 [2024-07-24 19:04:27.857993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.858017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.100 [2024-07-24 19:04:27.858033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.858055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.100 [2024-07-24 19:04:27.858071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.858093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:70616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.100 [2024-07-24 19:04:27.858117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.858141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.100 [2024-07-24 19:04:27.858157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.858179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:70648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.100 [2024-07-24 19:04:27.858195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.858217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:70664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:53.100 [2024-07-24 19:04:27.858233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:53.100 [2024-07-24 19:04:27.858254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:70360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.100 [2024-07-24 19:04:27.858285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:53.101 [2024-07-24 19:04:27.859331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:70680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.101 [2024-07-24 19:04:27.859370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:53.101 [2024-07-24 19:04:27.859403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:70696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.101 [2024-07-24 19:04:27.859421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:53.101 [2024-07-24 19:04:27.859443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:70712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:53.101 [2024-07-24 19:04:27.859460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:53.101 [2024-07-24 19:04:27.859482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.101 [2024-07-24 19:04:27.859498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:53.101 [2024-07-24 19:04:27.859520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:70072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.101 [2024-07-24 19:04:27.859535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:53.101 [2024-07-24 19:04:27.859557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.101 [2024-07-24 19:04:27.859572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:53.101 [2024-07-24 19:04:27.859594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:70200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.101 [2024-07-24 19:04:27.859610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:53.101 [2024-07-24 19:04:27.859647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:70024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.101 [2024-07-24 19:04:27.859663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:53.101 [2024-07-24 19:04:27.859684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 
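The ASYMMETRIC ACCESS INACCESSIBLE (03/02) notices above are the host-side view the multipath-status test is exercising: while one listener's ANA group is inaccessible, every in-flight READ and WRITE on that path completes with status 03/02 and the initiator retries it on the surviving path. On the target side such a transition is normally driven through rpc.py; the sketch below is illustrative only (the listener address 10.0.0.2:4420 is an assumed value, not taken from this log, and the option spelling should be checked against the SPDK release in use):

    # Hypothetical sketch: flip one path of cnode1 to inaccessible and back.
    # Address and port below are assumptions, not values read from this run.
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n inaccessible   # I/O on this path now completes with 03/02
    sleep 30                                         # give the host time to fail over and retry
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n optimized      # restore the path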
00:21:53.101 Received shutdown signal, test time was about 32.366305 seconds
00:21:53.101
00:21:53.101 Latency(us)
00:21:53.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:53.101 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:53.101 Verification LBA range: start 0x0 length 0x4000
00:21:53.101 Nvme0n1 : 32.37 8024.60 31.35 0.00 0.00 15924.63 283.69 4026531.84
00:21:53.101 ===================================================================================================================
00:21:53.101 Total : 8024.60 31.35 0.00 0.00 15924.63 283.69 4026531.84
00:21:53.101 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:53.359 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:21:53.359 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:21:53.359 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:21:53.359 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:53.359 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:21:53.359 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:21:53.359 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:21:53.359 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:53.359 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:21:53.359 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:21:53.617 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:21:53.617 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:21:53.617 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:21:53.617 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3218575 ']'
00:21:53.617 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3218575
00:21:53.617 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3218575 ']'
00:21:53.617 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3218575
00:21:53.617 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:21:53.617 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:53.617 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3218575
00:21:53.617 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:21:53.617 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:21:53.617 19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3218575'
killing process with pid 3218575
19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3218575
19:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3218575
00:21:53.875 19:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:21:53.875 19:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:21:53.875 19:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:21:53.875 19:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:21:53.875 19:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:21:53.875 19:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:53.875 19:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:53.875 19:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:55.774 19:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:21:55.774
00:21:55.774 real 0m41.821s
00:21:55.774 user 2m4.334s
00:21:55.774 sys 0m11.200s
00:21:55.774 19:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable
19:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:21:55.774 ************************************
00:21:55.774 END TEST nvmf_host_multipath_status
************************************
00:21:55.774 19:04:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:21:55.774 19:04:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
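The killprocess call traced above amounts to a guarded kill-and-wait. A minimal sketch reconstructed from the xtrace output (not copied from autotest_common.sh; the sudo branch is an assumption about the path not taken here):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                 # the '[' -z ... ']' guard seen in the trace
        kill -0 "$pid" || return 0                # nothing to do if the process is already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # here: reactor_0
        fi
        # The trace compares $process_name against "sudo"; a sudo wrapper
        # would need its child killed instead (assumed, branch not taken here).
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                               # reap the target and propagate its exit
    }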
00:21:55.774 19:04:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:55.774 19:04:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.031 ************************************ 00:21:56.031 START TEST nvmf_discovery_remove_ifc 00:21:56.031 ************************************ 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:56.031 * Looking for test storage... 00:21:56.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:56.031 
19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:56.031 19:04:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:56.031 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:56.032 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:21:56.032 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:21:56.032 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:21:56.032 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:21:56.032 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:21:56.032 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:21:56.032 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:21:56.032 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:56.032 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:56.032 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:56.032 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:56.032 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:56.032 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.032 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:56.032 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.032 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:56.032 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:56.032 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:21:56.032 19:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@295 -- # net_devs=() 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:57.930 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:57.930 19:04:35 
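The scan above works from a fixed vendor:device table (Intel E810 = 0x8086:0x1592/0x159b, X722 = 0x8086:0x37d2, plus several Mellanox ConnectX IDs) and matched both functions of one E810 NIC; the entries that follow resolve each function to its kernel netdev through sysfs. Standalone equivalents of both lookups, using the IDs and addresses from this run (adjust for other hardware):

  # List PCI functions carrying the E810 ID pair this run matched:
  lspci -Dnn -d 8086:159b
  # -> 0000:09:00.0 and 0000:09:00.1 on this machine
  # Resolve a function to its netdev name (the pci_net_devs glob in nvmf/common.sh):
  ls /sys/bus/pci/devices/0000:09:00.0/net/
  # -> cvl_0_0; its link state is what the '[[ up == up ]]' check is reading:
  cat /sys/class/net/cvl_0_0/operstate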
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:57.930 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:57.930 Found net devices under 0000:09:00.0: cvl_0_0 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:57.930 Found net devices under 0000:09:00.1: cvl_0_1 00:21:57.930 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.931 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:57.931 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:21:57.931 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:57.931 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:57.931 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:57.931 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:57.931 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:57.931 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:57.931 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:57.931 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:57.931 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:57.931 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:57.931 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:57.931 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:57.931 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:57.931 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:57.931 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:57.931 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:58.189 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:58.189 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:58.189 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:58.189 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:58.189 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:58.189 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:58.189 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:58.189 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:58.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:21:58.189 00:21:58.189 --- 10.0.0.2 ping statistics --- 00:21:58.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.189 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:21:58.189 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:58.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:58.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:21:58.189 00:21:58.189 --- 10.0.0.1 ping statistics --- 00:21:58.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.189 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:21:58.189 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:58.189 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:21:58.189 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:58.189 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:58.189 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:58.189 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:58.189 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:58.189 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:58.189 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:58.189 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:21:58.189 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:58.189 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:58.189 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:58.189 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3225077 00:21:58.189 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:58.189 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3225077 00:21:58.189 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 3225077 ']' 00:21:58.189 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.189 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:58.189 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
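To summarize the nvmf_tcp_init sequence that just completed: the two E810 ports are split across network namespaces so target and initiator traffic must cross the physical link, the firewall is opened for NVMe/TCP, and connectivity is verified in both directions before the target application is launched inside the namespace. Reconstructed from the trace (interface names and addresses are this run's):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                   # root ns -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root ns
  # every target-side command from here on is wrapped in:
  #   ip netns exec cvl_0_0_ns_spdk ...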
00:21:58.189 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:58.189 19:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:58.189 [2024-07-24 19:04:35.708341] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:21:58.189 [2024-07-24 19:04:35.708429] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.189 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.189 [2024-07-24 19:04:35.775258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.447 [2024-07-24 19:04:35.891205] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:58.447 [2024-07-24 19:04:35.891266] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:58.447 [2024-07-24 19:04:35.891283] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:58.447 [2024-07-24 19:04:35.891296] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:58.447 [2024-07-24 19:04:35.891307] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:58.447 [2024-07-24 19:04:35.891338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.379 19:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:59.379 19:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:21:59.379 19:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:59.379 19:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:59.379 19:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:59.379 19:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:59.379 19:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:21:59.379 19:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.379 19:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:59.379 [2024-07-24 19:04:36.688179] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:59.379 [2024-07-24 19:04:36.696322] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:59.379 null0 00:21:59.379 [2024-07-24 19:04:36.728257] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:59.380 19:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.380 19:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3225229 00:21:59.380 19:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3225229 /tmp/host.sock 00:21:59.380 19:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:21:59.380 19:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 3225229 ']' 00:21:59.380 19:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:21:59.380 19:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:59.380 19:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:59.380 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:59.380 19:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:59.380 19:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:59.380 [2024-07-24 19:04:36.795013] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:21:59.380 [2024-07-24 19:04:36.795096] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3225229 ] 00:21:59.380 EAL: No free 2048 kB hugepages reported on node 1 00:21:59.380 [2024-07-24 19:04:36.855579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.380 [2024-07-24 19:04:36.972076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.312 19:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:00.312 19:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:22:00.312 19:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:00.312 19:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:00.312 19:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.312 19:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:00.312 19:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.312 19:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:00.312 19:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.312 19:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:00.312 19:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.312 19:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:00.312 
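Two SPDK apps are now up: the target (pid 3225077, cores 0x2, RPC socket /var/tmp/spdk.sock, inside the namespace) and the host-side app under test (pid 3225229, cores 0x1, RPC socket /tmp/host.sock, held idle by --wait-for-rpc until configured). rpc_cmd in the trace is the harness wrapper around scripts/rpc.py; driven directly, the host-side configuration above is:

  scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
  scripts/rpc.py -s /tmp/host.sock framework_start_init
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach
  # The very short loss/reconnect timeouts are what make the interface-removal
  # phases below complete in seconds.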
19:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.312 19:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:01.684 [2024-07-24 19:04:38.918975] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:01.684 [2024-07-24 19:04:38.919018] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:01.684 [2024-07-24 19:04:38.919043] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:01.684 [2024-07-24 19:04:39.047511] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:01.684 [2024-07-24 19:04:39.150432] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:01.684 [2024-07-24 19:04:39.150498] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:01.684 [2024-07-24 19:04:39.150545] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:01.684 [2024-07-24 19:04:39.150571] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:01.684 [2024-07-24 19:04:39.150611] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:01.684 19:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.684 19:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:01.684 19:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:01.684 19:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:01.684 19:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:01.684 19:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.684 19:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:01.684 19:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:01.684 19:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:01.684 [2024-07-24 19:04:39.156675] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xc068e0 was disconnected and freed. delete nvme_qpair. 
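Discovery attached the data subsystem as controller nvme0 and its namespace surfaced as bdev nvme0n1, which wait_for_bdev confirms. The get_bdev_list/wait_for_bdev pair the trace repeats from here on boils down to the following (again assuming rpc_cmd wraps scripts/rpc.py):

  get_bdev_list() {
      scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {   # poll once a second until the list matches the expected name(s)
      while [[ "$(get_bdev_list)" != "$1" ]]; do sleep 1; done
  }
  wait_for_bdev nvme0n1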
00:22:01.684 19:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.684 19:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:01.684 19:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:22:01.684 19:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:22:01.684 19:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:01.684 19:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:01.684 19:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:01.684 19:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:01.684 19:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.684 19:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:01.685 19:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:01.685 19:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:01.685 19:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.942 19:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:01.942 19:04:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:02.874 19:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:02.874 19:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:02.874 19:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:02.874 19:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.874 19:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:02.874 19:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:02.874 19:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:02.874 19:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.874 19:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:02.874 19:04:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:03.807 19:04:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:03.807 19:04:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:03.807 19:04:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:03.807 19:04:41 
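Here is the fault the test is named for: steps @75 and @76 deleted the target's address and downed its port, and the loop now polls until nvme0n1 drops out of the bdev list. The equivalent standalone commands:

  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
  wait_for_bdev ''     # empty list expected once the controller is torn down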
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.807 19:04:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:03.807 19:04:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:03.807 19:04:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:03.807 19:04:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.807 19:04:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:03.807 19:04:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:05.178 19:04:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:05.178 19:04:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:05.178 19:04:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:05.178 19:04:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.178 19:04:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:05.178 19:04:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:05.178 19:04:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:05.178 19:04:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.178 19:04:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:05.178 19:04:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:06.109 19:04:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:06.109 19:04:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:06.109 19:04:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:06.109 19:04:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.109 19:04:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:06.109 19:04:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:06.109 19:04:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:06.109 19:04:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.109 19:04:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:06.109 19:04:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:07.041 19:04:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:07.041 19:04:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:07.041 19:04:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:07.041 19:04:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.041 19:04:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:07.041 19:04:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:07.041 19:04:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:07.041 19:04:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.041 19:04:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:07.041 19:04:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:07.041 [2024-07-24 19:04:44.591252] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:07.041 [2024-07-24 19:04:44.591327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.041 [2024-07-24 19:04:44.591348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.041 [2024-07-24 19:04:44.591366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.041 [2024-07-24 19:04:44.591378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.041 [2024-07-24 19:04:44.591392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.041 [2024-07-24 19:04:44.591408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.041 [2024-07-24 19:04:44.591422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.041 [2024-07-24 19:04:44.591451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.041 [2024-07-24 19:04:44.591467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.041 [2024-07-24 19:04:44.591482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.041 [2024-07-24 19:04:44.591497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcd320 is same with the state(5) to be set 00:22:07.041 [2024-07-24 19:04:44.601270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcd320 (9): Bad file descriptor 00:22:07.041 [2024-07-24 19:04:44.611314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:07.973 19:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:07.973 19:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
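The error burst above is the host noticing the dead link: spdk_sock_recv() returns errno 110, the pending admin commands (the async event requests and a keep-alive) are aborted by SQ deletion, and bdev_nvme begins its reset/reconnect cycle. errno 110 is ETIMEDOUT, confirmable with a one-liner:

  python3 -c 'import errno, os; print(errno.errorcode[110], "-", os.strerror(110))'
  # -> ETIMEDOUT - Connection timed out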
00:22:07.973 19:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.973 19:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:07.973 19:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:07.973 19:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:07.973 19:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:08.230 [2024-07-24 19:04:45.650158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:08.230 [2024-07-24 19:04:45.650221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcd320 with addr=10.0.0.2, port=4420 00:22:08.230 [2024-07-24 19:04:45.650254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcd320 is same with the state(5) to be set 00:22:08.230 [2024-07-24 19:04:45.650302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcd320 (9): Bad file descriptor 00:22:08.230 [2024-07-24 19:04:45.650756] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:08.230 [2024-07-24 19:04:45.650798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:08.230 [2024-07-24 19:04:45.650815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:08.230 [2024-07-24 19:04:45.650830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:08.230 [2024-07-24 19:04:45.650859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:08.230 [2024-07-24 19:04:45.650876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:08.230 19:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.230 19:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:08.230 19:04:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:09.161 [2024-07-24 19:04:46.653367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:09.161 [2024-07-24 19:04:46.653396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:09.161 [2024-07-24 19:04:46.653411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:09.161 [2024-07-24 19:04:46.653426] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:09.161 [2024-07-24 19:04:46.653446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:09.161 [2024-07-24 19:04:46.653480] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:09.161 [2024-07-24 19:04:46.653517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.161 [2024-07-24 19:04:46.653539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.161 [2024-07-24 19:04:46.653557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.161 [2024-07-24 19:04:46.653571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.161 [2024-07-24 19:04:46.653586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.161 [2024-07-24 19:04:46.653599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.161 [2024-07-24 19:04:46.653614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.161 [2024-07-24 19:04:46.653627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.161 [2024-07-24 19:04:46.653642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.161 [2024-07-24 19:04:46.653658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.161 [2024-07-24 19:04:46.653673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
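With --reconnect-delay-sec 1 the host retried roughly once a second, and once --ctrlr-loss-timeout-sec 2 expired it gave up: the data controller was deleted (taking nvme0n1 with it) and the discovery entry for the subsystem was removed, which is why the polled bdev list is about to come back empty. During this window the controller and discovery state could be inspected over the host socket; both RPCs below exist in current SPDK releases, though their output shape varies by version:

  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_discovery_info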
00:22:09.161 [2024-07-24 19:04:46.653873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcc780 (9): Bad file descriptor 00:22:09.161 [2024-07-24 19:04:46.654891] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:09.161 [2024-07-24 19:04:46.654915] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:09.161 19:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:09.161 19:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:09.161 19:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:09.161 19:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.161 19:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:09.161 19:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:09.161 19:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:09.161 19:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.161 19:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:09.161 19:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:09.161 19:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:09.161 19:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:09.161 19:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:09.161 19:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:09.161 19:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:09.161 19:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.161 19:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:09.161 19:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:09.161 19:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:09.161 19:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.419 19:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:09.419 19:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:10.351 19:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:10.351 19:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:10.351 19:04:47 
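Recovery phase: steps @82 and @83 restored the address and brought the port back up, and the loop now waits for the discovery service to reconnect and re-attach the subsystem. The expected bdev is nvme1n1 because the re-attached controller is named nvme1, not nvme0:

  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  wait_for_bdev nvme1n1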
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:10.351 19:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.351 19:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:10.351 19:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:10.351 19:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:10.351 19:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.351 19:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:10.351 19:04:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:11.283 [2024-07-24 19:04:48.715015] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:11.283 [2024-07-24 19:04:48.715054] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:11.283 [2024-07-24 19:04:48.715081] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:11.283 19:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:11.283 19:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:11.283 19:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:11.283 19:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.283 19:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:11.283 19:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:11.283 19:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:11.283 [2024-07-24 19:04:48.842519] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:11.283 19:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.283 19:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:11.283 19:04:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:11.545 [2024-07-24 19:04:49.026856] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:11.545 [2024-07-24 19:04:49.026908] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:11.545 [2024-07-24 19:04:49.026946] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:11.545 [2024-07-24 19:04:49.026971] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:22:11.545 [2024-07-24 19:04:49.026986] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:11.545 [2024-07-24 19:04:49.032827] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xbd3120 was disconnected and freed. 
delete nvme_qpair. 00:22:12.517 19:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:12.517 19:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:12.517 19:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:12.517 19:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.517 19:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:12.517 19:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:12.517 19:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:12.517 19:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.518 19:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:12.518 19:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:12.518 19:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3225229 00:22:12.518 19:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 3225229 ']' 00:22:12.518 19:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 3225229 00:22:12.518 19:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:22:12.518 19:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:12.518 19:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3225229 00:22:12.518 19:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:12.518 19:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:12.518 19:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3225229' 00:22:12.518 killing process with pid 3225229 00:22:12.518 19:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 3225229 00:22:12.518 19:04:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 3225229 00:22:12.776 19:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:12.776 19:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:12.776 19:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:22:12.776 19:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:12.776 19:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:22:12.776 19:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:12.776 19:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:12.776 rmmod nvme_tcp 00:22:12.776 rmmod nvme_fabrics 00:22:12.776 rmmod nvme_keyring 
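nvme1n1 appeared, so the test passes and tears down: killprocess stops the host app (pid 3225229), then nvmftestfini unloads the nvme-tcp/nvme-fabrics modules and stops the target the same way. A simplified sketch of the killprocess helper matching the checks traced above (the real autotest_common.sh version carries more retries and error handling):

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1                    # bail out if already gone
      if [[ $(uname) == Linux ]]; then
          # refuse to kill a sudo wrapper; the real process should be signalled
          [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true
  }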
00:22:12.776 19:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:12.776 19:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:22:12.776 19:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:22:12.776 19:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3225077 ']' 00:22:12.776 19:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3225077 00:22:12.776 19:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 3225077 ']' 00:22:12.776 19:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 3225077 00:22:12.776 19:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:22:12.776 19:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:12.776 19:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3225077 00:22:12.776 19:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:12.776 19:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:12.776 19:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3225077' 00:22:12.776 killing process with pid 3225077 00:22:12.776 19:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 3225077 00:22:12.776 19:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 3225077 00:22:13.033 19:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:13.033 19:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:13.033 19:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:13.033 19:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:13.033 19:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:13.033 19:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.033 19:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:13.034 19:04:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:15.564 00:22:15.564 real 0m19.268s 00:22:15.564 user 0m28.347s 00:22:15.564 sys 0m3.177s 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:15.564 ************************************ 00:22:15.564 END TEST nvmf_discovery_remove_ifc 00:22:15.564 ************************************ 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.564 ************************************ 00:22:15.564 START TEST nvmf_identify_kernel_target 00:22:15.564 ************************************ 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:15.564 * Looking for test storage... 00:22:15.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:15.564 19:04:52 
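The new test's setup sources nvmf/common.sh again, which this time also generates a host NQN. nvme gen-hostnqn is the nvme-cli helper invoked above; it emits a fresh UUID-based NQN on each call:

  nvme gen-hostnqn
  # this run produced:
  # nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a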
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.564 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:15.565 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:15.565 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:22:15.565 19:04:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:22:17.462 
19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:17.462 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:17.462 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:17.462 Found net devices under 0000:09:00.0: cvl_0_0 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:17.462 Found net devices under 0000:09:00.1: cvl_0_1 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:17.462 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:17.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:17.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:22:17.463 00:22:17.463 --- 10.0.0.2 ping statistics --- 00:22:17.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.463 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:17.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:17.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:22:17.463 00:22:17.463 --- 10.0.0.1 ping statistics --- 00:22:17.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.463 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:17.463 19:04:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:18.396 Waiting for block devices as requested 00:22:18.396 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:18.396 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:18.654 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:18.654 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:18.654 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:18.654 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:18.912 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:18.912 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:18.912 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:22:19.169 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:19.169 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:19.169 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:19.169 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:19.426 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:19.426 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:19.426 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:19.426 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:19.684 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:19.684 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:19.684 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:22:19.684 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:22:19.684 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:19.684 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:19.684 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:22:19.684 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
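[Editor's sketch] The scan above walks /sys/block/nvme*, rejecting zoned namespaces, and (just below) rejects anything that already carries a partition table; the first survivor becomes the kernel target's backing device. Condensed into standalone shell, with blkid standing in for the spdk-gpt.py probe the script tries first (a sketch of the selection logic, not the exact helper):

    nvme_dev=
    for block in /sys/block/nvme*; do
        dev=${block##*/}
        # skip zoned namespaces (queue/zoned reads "none" for regular devices)
        [[ -e $block/queue/zoned && $(< "$block/queue/zoned") != none ]] && continue
        # skip devices that already hold a partition table
        [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]] && continue
        nvme_dev=/dev/$dev && break
    done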
00:22:19.684 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:19.684 No valid GPT data, bailing 00:22:19.684 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:19.684 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:22:19.684 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:22:19.684 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:19.684 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:22:19.684 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:19.684 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:19.684 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:19.684 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:19.684 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:22:19.684 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:22:19.684 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:22:19.684 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:22:19.684 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:22:19.684 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:22:19.684 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:22:19.684 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:19.684 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:22:19.943 00:22:19.943 Discovery Log Number of Records 2, Generation counter 2 00:22:19.943 =====Discovery Log Entry 0====== 00:22:19.943 trtype: tcp 00:22:19.943 adrfam: ipv4 00:22:19.943 subtype: current discovery subsystem 00:22:19.943 treq: not specified, sq flow control disable supported 00:22:19.943 portid: 1 00:22:19.943 trsvcid: 4420 00:22:19.943 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:19.943 traddr: 10.0.0.1 00:22:19.943 eflags: none 00:22:19.943 sectype: none 00:22:19.943 =====Discovery Log Entry 1====== 00:22:19.943 trtype: tcp 00:22:19.943 adrfam: ipv4 00:22:19.943 subtype: nvme subsystem 00:22:19.943 treq: not specified, sq flow control disable supported 00:22:19.943 portid: 1 00:22:19.943 trsvcid: 4420 00:22:19.943 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:19.943 traddr: 10.0.0.1 00:22:19.943 eflags: none 00:22:19.943 sectype: none 00:22:19.943 19:04:57 
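[Editor's sketch] The target wiring that produced the two discovery records above is plain configfs; the xtrace hides the redirection targets, but reconstructed with the conventional nvmet attribute paths it is roughly the following (a sketch assuming the standard nvmet layout; attr_model is inferred from the "Model Number: SPDK-nqn.2016-06.io.spdk:testnqn" that identify reports further down):

    modprobe nvmet nvmet-tcp     # tcp transport module must be present for the port
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp      > "$nvmet/ports/1/addr_trtype"
    echo 4420     > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4     > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"   # publishing the subsystem on the port

Once the symlink lands, the kernel target answers on 10.0.0.1:4420 and the nvme discover call above returns both the discovery subsystem and the test NQN.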
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:22:19.943 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:22:19.943 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.943 ===================================================== 00:22:19.943 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:19.943 ===================================================== 00:22:19.943 Controller Capabilities/Features 00:22:19.943 ================================ 00:22:19.943 Vendor ID: 0000 00:22:19.943 Subsystem Vendor ID: 0000 00:22:19.943 Serial Number: c036b330ed78e171debc 00:22:19.943 Model Number: Linux 00:22:19.943 Firmware Version: 6.7.0-68 00:22:19.943 Recommended Arb Burst: 0 00:22:19.943 IEEE OUI Identifier: 00 00 00 00:22:19.943 Multi-path I/O 00:22:19.944 May have multiple subsystem ports: No 00:22:19.944 May have multiple controllers: No 00:22:19.944 Associated with SR-IOV VF: No 00:22:19.944 Max Data Transfer Size: Unlimited 00:22:19.944 Max Number of Namespaces: 0 00:22:19.944 Max Number of I/O Queues: 1024 00:22:19.944 NVMe Specification Version (VS): 1.3 00:22:19.944 NVMe Specification Version (Identify): 1.3 00:22:19.944 Maximum Queue Entries: 1024 00:22:19.944 Contiguous Queues Required: No 00:22:19.944 Arbitration Mechanisms Supported 00:22:19.944 Weighted Round Robin: Not Supported 00:22:19.944 Vendor Specific: Not Supported 00:22:19.944 Reset Timeout: 7500 ms 00:22:19.944 Doorbell Stride: 4 bytes 00:22:19.944 NVM Subsystem Reset: Not Supported 00:22:19.944 Command Sets Supported 00:22:19.944 NVM Command Set: Supported 00:22:19.944 Boot Partition: Not Supported 00:22:19.944 Memory Page Size Minimum: 4096 bytes 00:22:19.944 Memory Page Size Maximum: 4096 bytes 00:22:19.944 Persistent Memory Region: Not Supported 00:22:19.944 Optional Asynchronous Events Supported 00:22:19.944 Namespace Attribute Notices: Not Supported 00:22:19.944 Firmware Activation Notices: Not Supported 00:22:19.944 ANA Change Notices: Not Supported 00:22:19.944 PLE Aggregate Log Change Notices: Not Supported 00:22:19.944 LBA Status Info Alert Notices: Not Supported 00:22:19.944 EGE Aggregate Log Change Notices: Not Supported 00:22:19.944 Normal NVM Subsystem Shutdown event: Not Supported 00:22:19.944 Zone Descriptor Change Notices: Not Supported 00:22:19.944 Discovery Log Change Notices: Supported 00:22:19.944 Controller Attributes 00:22:19.944 128-bit Host Identifier: Not Supported 00:22:19.944 Non-Operational Permissive Mode: Not Supported 00:22:19.944 NVM Sets: Not Supported 00:22:19.944 Read Recovery Levels: Not Supported 00:22:19.944 Endurance Groups: Not Supported 00:22:19.944 Predictable Latency Mode: Not Supported 00:22:19.944 Traffic Based Keep ALive: Not Supported 00:22:19.944 Namespace Granularity: Not Supported 00:22:19.944 SQ Associations: Not Supported 00:22:19.944 UUID List: Not Supported 00:22:19.944 Multi-Domain Subsystem: Not Supported 00:22:19.944 Fixed Capacity Management: Not Supported 00:22:19.944 Variable Capacity Management: Not Supported 00:22:19.944 Delete Endurance Group: Not Supported 00:22:19.944 Delete NVM Set: Not Supported 00:22:19.944 Extended LBA Formats Supported: Not Supported 00:22:19.944 Flexible Data Placement Supported: Not Supported 00:22:19.944 00:22:19.944 Controller Memory Buffer Support 00:22:19.944 ================================ 00:22:19.944 Supported: No 
00:22:19.944 00:22:19.944 Persistent Memory Region Support 00:22:19.944 ================================ 00:22:19.944 Supported: No 00:22:19.944 00:22:19.944 Admin Command Set Attributes 00:22:19.944 ============================ 00:22:19.944 Security Send/Receive: Not Supported 00:22:19.944 Format NVM: Not Supported 00:22:19.944 Firmware Activate/Download: Not Supported 00:22:19.944 Namespace Management: Not Supported 00:22:19.944 Device Self-Test: Not Supported 00:22:19.944 Directives: Not Supported 00:22:19.944 NVMe-MI: Not Supported 00:22:19.944 Virtualization Management: Not Supported 00:22:19.944 Doorbell Buffer Config: Not Supported 00:22:19.944 Get LBA Status Capability: Not Supported 00:22:19.944 Command & Feature Lockdown Capability: Not Supported 00:22:19.944 Abort Command Limit: 1 00:22:19.944 Async Event Request Limit: 1 00:22:19.944 Number of Firmware Slots: N/A 00:22:19.944 Firmware Slot 1 Read-Only: N/A 00:22:19.944 Firmware Activation Without Reset: N/A 00:22:19.944 Multiple Update Detection Support: N/A 00:22:19.944 Firmware Update Granularity: No Information Provided 00:22:19.944 Per-Namespace SMART Log: No 00:22:19.944 Asymmetric Namespace Access Log Page: Not Supported 00:22:19.944 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:19.944 Command Effects Log Page: Not Supported 00:22:19.944 Get Log Page Extended Data: Supported 00:22:19.944 Telemetry Log Pages: Not Supported 00:22:19.944 Persistent Event Log Pages: Not Supported 00:22:19.944 Supported Log Pages Log Page: May Support 00:22:19.944 Commands Supported & Effects Log Page: Not Supported 00:22:19.944 Feature Identifiers & Effects Log Page:May Support 00:22:19.944 NVMe-MI Commands & Effects Log Page: May Support 00:22:19.944 Data Area 4 for Telemetry Log: Not Supported 00:22:19.944 Error Log Page Entries Supported: 1 00:22:19.944 Keep Alive: Not Supported 00:22:19.944 00:22:19.944 NVM Command Set Attributes 00:22:19.944 ========================== 00:22:19.944 Submission Queue Entry Size 00:22:19.944 Max: 1 00:22:19.944 Min: 1 00:22:19.944 Completion Queue Entry Size 00:22:19.944 Max: 1 00:22:19.944 Min: 1 00:22:19.944 Number of Namespaces: 0 00:22:19.944 Compare Command: Not Supported 00:22:19.944 Write Uncorrectable Command: Not Supported 00:22:19.944 Dataset Management Command: Not Supported 00:22:19.944 Write Zeroes Command: Not Supported 00:22:19.944 Set Features Save Field: Not Supported 00:22:19.944 Reservations: Not Supported 00:22:19.944 Timestamp: Not Supported 00:22:19.944 Copy: Not Supported 00:22:19.944 Volatile Write Cache: Not Present 00:22:19.944 Atomic Write Unit (Normal): 1 00:22:19.944 Atomic Write Unit (PFail): 1 00:22:19.944 Atomic Compare & Write Unit: 1 00:22:19.944 Fused Compare & Write: Not Supported 00:22:19.944 Scatter-Gather List 00:22:19.944 SGL Command Set: Supported 00:22:19.944 SGL Keyed: Not Supported 00:22:19.944 SGL Bit Bucket Descriptor: Not Supported 00:22:19.944 SGL Metadata Pointer: Not Supported 00:22:19.944 Oversized SGL: Not Supported 00:22:19.944 SGL Metadata Address: Not Supported 00:22:19.944 SGL Offset: Supported 00:22:19.944 Transport SGL Data Block: Not Supported 00:22:19.944 Replay Protected Memory Block: Not Supported 00:22:19.944 00:22:19.944 Firmware Slot Information 00:22:19.944 ========================= 00:22:19.944 Active slot: 0 00:22:19.944 00:22:19.944 00:22:19.944 Error Log 00:22:19.944 ========= 00:22:19.944 00:22:19.944 Active Namespaces 00:22:19.944 ================= 00:22:19.944 Discovery Log Page 00:22:19.944 ================== 00:22:19.944 
Generation Counter: 2 00:22:19.944 Number of Records: 2 00:22:19.944 Record Format: 0 00:22:19.944 00:22:19.944 Discovery Log Entry 0 00:22:19.944 ---------------------- 00:22:19.944 Transport Type: 3 (TCP) 00:22:19.944 Address Family: 1 (IPv4) 00:22:19.944 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:19.944 Entry Flags: 00:22:19.944 Duplicate Returned Information: 0 00:22:19.944 Explicit Persistent Connection Support for Discovery: 0 00:22:19.944 Transport Requirements: 00:22:19.944 Secure Channel: Not Specified 00:22:19.944 Port ID: 1 (0x0001) 00:22:19.944 Controller ID: 65535 (0xffff) 00:22:19.944 Admin Max SQ Size: 32 00:22:19.944 Transport Service Identifier: 4420 00:22:19.944 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:19.944 Transport Address: 10.0.0.1 00:22:19.944 Discovery Log Entry 1 00:22:19.944 ---------------------- 00:22:19.944 Transport Type: 3 (TCP) 00:22:19.944 Address Family: 1 (IPv4) 00:22:19.944 Subsystem Type: 2 (NVM Subsystem) 00:22:19.944 Entry Flags: 00:22:19.944 Duplicate Returned Information: 0 00:22:19.944 Explicit Persistent Connection Support for Discovery: 0 00:22:19.944 Transport Requirements: 00:22:19.944 Secure Channel: Not Specified 00:22:19.944 Port ID: 1 (0x0001) 00:22:19.944 Controller ID: 65535 (0xffff) 00:22:19.944 Admin Max SQ Size: 32 00:22:19.944 Transport Service Identifier: 4420 00:22:19.944 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:22:19.944 Transport Address: 10.0.0.1 00:22:19.944 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:19.944 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.944 get_feature(0x01) failed 00:22:19.944 get_feature(0x02) failed 00:22:19.944 get_feature(0x04) failed 00:22:19.944 ===================================================== 00:22:19.944 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:19.944 ===================================================== 00:22:19.944 Controller Capabilities/Features 00:22:19.944 ================================ 00:22:19.944 Vendor ID: 0000 00:22:19.944 Subsystem Vendor ID: 0000 00:22:19.944 Serial Number: 88bf774485ab23277d84 00:22:19.944 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:22:19.944 Firmware Version: 6.7.0-68 00:22:19.944 Recommended Arb Burst: 6 00:22:19.944 IEEE OUI Identifier: 00 00 00 00:22:19.944 Multi-path I/O 00:22:19.945 May have multiple subsystem ports: Yes 00:22:19.945 May have multiple controllers: Yes 00:22:19.945 Associated with SR-IOV VF: No 00:22:19.945 Max Data Transfer Size: Unlimited 00:22:19.945 Max Number of Namespaces: 1024 00:22:19.945 Max Number of I/O Queues: 128 00:22:19.945 NVMe Specification Version (VS): 1.3 00:22:19.945 NVMe Specification Version (Identify): 1.3 00:22:19.945 Maximum Queue Entries: 1024 00:22:19.945 Contiguous Queues Required: No 00:22:19.945 Arbitration Mechanisms Supported 00:22:19.945 Weighted Round Robin: Not Supported 00:22:19.945 Vendor Specific: Not Supported 00:22:19.945 Reset Timeout: 7500 ms 00:22:19.945 Doorbell Stride: 4 bytes 00:22:19.945 NVM Subsystem Reset: Not Supported 00:22:19.945 Command Sets Supported 00:22:19.945 NVM Command Set: Supported 00:22:19.945 Boot Partition: Not Supported 00:22:19.945 Memory Page Size Minimum: 4096 bytes 00:22:19.945 Memory Page Size Maximum: 4096 bytes 00:22:19.945 
Persistent Memory Region: Not Supported 00:22:19.945 Optional Asynchronous Events Supported 00:22:19.945 Namespace Attribute Notices: Supported 00:22:19.945 Firmware Activation Notices: Not Supported 00:22:19.945 ANA Change Notices: Supported 00:22:19.945 PLE Aggregate Log Change Notices: Not Supported 00:22:19.945 LBA Status Info Alert Notices: Not Supported 00:22:19.945 EGE Aggregate Log Change Notices: Not Supported 00:22:19.945 Normal NVM Subsystem Shutdown event: Not Supported 00:22:19.945 Zone Descriptor Change Notices: Not Supported 00:22:19.945 Discovery Log Change Notices: Not Supported 00:22:19.945 Controller Attributes 00:22:19.945 128-bit Host Identifier: Supported 00:22:19.945 Non-Operational Permissive Mode: Not Supported 00:22:19.945 NVM Sets: Not Supported 00:22:19.945 Read Recovery Levels: Not Supported 00:22:19.945 Endurance Groups: Not Supported 00:22:19.945 Predictable Latency Mode: Not Supported 00:22:19.945 Traffic Based Keep ALive: Supported 00:22:19.945 Namespace Granularity: Not Supported 00:22:19.945 SQ Associations: Not Supported 00:22:19.945 UUID List: Not Supported 00:22:19.945 Multi-Domain Subsystem: Not Supported 00:22:19.945 Fixed Capacity Management: Not Supported 00:22:19.945 Variable Capacity Management: Not Supported 00:22:19.945 Delete Endurance Group: Not Supported 00:22:19.945 Delete NVM Set: Not Supported 00:22:19.945 Extended LBA Formats Supported: Not Supported 00:22:19.945 Flexible Data Placement Supported: Not Supported 00:22:19.945 00:22:19.945 Controller Memory Buffer Support 00:22:19.945 ================================ 00:22:19.945 Supported: No 00:22:19.945 00:22:19.945 Persistent Memory Region Support 00:22:19.945 ================================ 00:22:19.945 Supported: No 00:22:19.945 00:22:19.945 Admin Command Set Attributes 00:22:19.945 ============================ 00:22:19.945 Security Send/Receive: Not Supported 00:22:19.945 Format NVM: Not Supported 00:22:19.945 Firmware Activate/Download: Not Supported 00:22:19.945 Namespace Management: Not Supported 00:22:19.945 Device Self-Test: Not Supported 00:22:19.945 Directives: Not Supported 00:22:19.945 NVMe-MI: Not Supported 00:22:19.945 Virtualization Management: Not Supported 00:22:19.945 Doorbell Buffer Config: Not Supported 00:22:19.945 Get LBA Status Capability: Not Supported 00:22:19.945 Command & Feature Lockdown Capability: Not Supported 00:22:19.945 Abort Command Limit: 4 00:22:19.945 Async Event Request Limit: 4 00:22:19.945 Number of Firmware Slots: N/A 00:22:19.945 Firmware Slot 1 Read-Only: N/A 00:22:19.945 Firmware Activation Without Reset: N/A 00:22:19.945 Multiple Update Detection Support: N/A 00:22:19.945 Firmware Update Granularity: No Information Provided 00:22:19.945 Per-Namespace SMART Log: Yes 00:22:19.945 Asymmetric Namespace Access Log Page: Supported 00:22:19.945 ANA Transition Time : 10 sec 00:22:19.945 00:22:19.945 Asymmetric Namespace Access Capabilities 00:22:19.945 ANA Optimized State : Supported 00:22:19.945 ANA Non-Optimized State : Supported 00:22:19.945 ANA Inaccessible State : Supported 00:22:19.945 ANA Persistent Loss State : Supported 00:22:19.945 ANA Change State : Supported 00:22:19.945 ANAGRPID is not changed : No 00:22:19.945 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:22:19.945 00:22:19.945 ANA Group Identifier Maximum : 128 00:22:19.945 Number of ANA Group Identifiers : 128 00:22:19.945 Max Number of Allowed Namespaces : 1024 00:22:19.945 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:22:19.945 Command Effects Log Page: Supported 
00:22:19.945 Get Log Page Extended Data: Supported 00:22:19.945 Telemetry Log Pages: Not Supported 00:22:19.945 Persistent Event Log Pages: Not Supported 00:22:19.945 Supported Log Pages Log Page: May Support 00:22:19.945 Commands Supported & Effects Log Page: Not Supported 00:22:19.945 Feature Identifiers & Effects Log Page:May Support 00:22:19.945 NVMe-MI Commands & Effects Log Page: May Support 00:22:19.945 Data Area 4 for Telemetry Log: Not Supported 00:22:19.945 Error Log Page Entries Supported: 128 00:22:19.945 Keep Alive: Supported 00:22:19.945 Keep Alive Granularity: 1000 ms 00:22:19.945 00:22:19.945 NVM Command Set Attributes 00:22:19.945 ========================== 00:22:19.945 Submission Queue Entry Size 00:22:19.945 Max: 64 00:22:19.945 Min: 64 00:22:19.945 Completion Queue Entry Size 00:22:19.945 Max: 16 00:22:19.945 Min: 16 00:22:19.945 Number of Namespaces: 1024 00:22:19.945 Compare Command: Not Supported 00:22:19.945 Write Uncorrectable Command: Not Supported 00:22:19.945 Dataset Management Command: Supported 00:22:19.945 Write Zeroes Command: Supported 00:22:19.945 Set Features Save Field: Not Supported 00:22:19.945 Reservations: Not Supported 00:22:19.945 Timestamp: Not Supported 00:22:19.945 Copy: Not Supported 00:22:19.945 Volatile Write Cache: Present 00:22:19.945 Atomic Write Unit (Normal): 1 00:22:19.945 Atomic Write Unit (PFail): 1 00:22:19.945 Atomic Compare & Write Unit: 1 00:22:19.945 Fused Compare & Write: Not Supported 00:22:19.945 Scatter-Gather List 00:22:19.945 SGL Command Set: Supported 00:22:19.945 SGL Keyed: Not Supported 00:22:19.945 SGL Bit Bucket Descriptor: Not Supported 00:22:19.945 SGL Metadata Pointer: Not Supported 00:22:19.945 Oversized SGL: Not Supported 00:22:19.945 SGL Metadata Address: Not Supported 00:22:19.945 SGL Offset: Supported 00:22:19.945 Transport SGL Data Block: Not Supported 00:22:19.945 Replay Protected Memory Block: Not Supported 00:22:19.945 00:22:19.945 Firmware Slot Information 00:22:19.945 ========================= 00:22:19.945 Active slot: 0 00:22:19.945 00:22:19.945 Asymmetric Namespace Access 00:22:19.945 =========================== 00:22:19.945 Change Count : 0 00:22:19.945 Number of ANA Group Descriptors : 1 00:22:19.945 ANA Group Descriptor : 0 00:22:19.945 ANA Group ID : 1 00:22:19.945 Number of NSID Values : 1 00:22:19.945 Change Count : 0 00:22:19.945 ANA State : 1 00:22:19.945 Namespace Identifier : 1 00:22:19.945 00:22:19.945 Commands Supported and Effects 00:22:19.945 ============================== 00:22:19.945 Admin Commands 00:22:19.945 -------------- 00:22:19.945 Get Log Page (02h): Supported 00:22:19.945 Identify (06h): Supported 00:22:19.945 Abort (08h): Supported 00:22:19.945 Set Features (09h): Supported 00:22:19.945 Get Features (0Ah): Supported 00:22:19.945 Asynchronous Event Request (0Ch): Supported 00:22:19.945 Keep Alive (18h): Supported 00:22:19.945 I/O Commands 00:22:19.945 ------------ 00:22:19.945 Flush (00h): Supported 00:22:19.945 Write (01h): Supported LBA-Change 00:22:19.945 Read (02h): Supported 00:22:19.945 Write Zeroes (08h): Supported LBA-Change 00:22:19.945 Dataset Management (09h): Supported 00:22:19.945 00:22:19.945 Error Log 00:22:19.945 ========= 00:22:19.945 Entry: 0 00:22:19.945 Error Count: 0x3 00:22:19.945 Submission Queue Id: 0x0 00:22:19.945 Command Id: 0x5 00:22:19.945 Phase Bit: 0 00:22:19.945 Status Code: 0x2 00:22:19.945 Status Code Type: 0x0 00:22:19.945 Do Not Retry: 1 00:22:19.945 Error Location: 0x28 00:22:19.945 LBA: 0x0 00:22:19.945 Namespace: 0x0 00:22:19.945 Vendor Log 
Page: 0x0 00:22:19.945 ----------- 00:22:19.945 Entry: 1 00:22:19.945 Error Count: 0x2 00:22:19.946 Submission Queue Id: 0x0 00:22:19.946 Command Id: 0x5 00:22:19.946 Phase Bit: 0 00:22:19.946 Status Code: 0x2 00:22:19.946 Status Code Type: 0x0 00:22:19.946 Do Not Retry: 1 00:22:19.946 Error Location: 0x28 00:22:19.946 LBA: 0x0 00:22:19.946 Namespace: 0x0 00:22:19.946 Vendor Log Page: 0x0 00:22:19.946 ----------- 00:22:19.946 Entry: 2 00:22:19.946 Error Count: 0x1 00:22:19.946 Submission Queue Id: 0x0 00:22:19.946 Command Id: 0x4 00:22:19.946 Phase Bit: 0 00:22:19.946 Status Code: 0x2 00:22:19.946 Status Code Type: 0x0 00:22:19.946 Do Not Retry: 1 00:22:19.946 Error Location: 0x28 00:22:19.946 LBA: 0x0 00:22:19.946 Namespace: 0x0 00:22:19.946 Vendor Log Page: 0x0 00:22:19.946 00:22:19.946 Number of Queues 00:22:19.946 ================ 00:22:19.946 Number of I/O Submission Queues: 128 00:22:19.946 Number of I/O Completion Queues: 128 00:22:19.946 00:22:19.946 ZNS Specific Controller Data 00:22:19.946 ============================ 00:22:19.946 Zone Append Size Limit: 0 00:22:19.946 00:22:19.946 00:22:19.946 Active Namespaces 00:22:19.946 ================= 00:22:19.946 get_feature(0x05) failed 00:22:19.946 Namespace ID:1 00:22:19.946 Command Set Identifier: NVM (00h) 00:22:19.946 Deallocate: Supported 00:22:19.946 Deallocated/Unwritten Error: Not Supported 00:22:19.946 Deallocated Read Value: Unknown 00:22:19.946 Deallocate in Write Zeroes: Not Supported 00:22:19.946 Deallocated Guard Field: 0xFFFF 00:22:19.946 Flush: Supported 00:22:19.946 Reservation: Not Supported 00:22:19.946 Namespace Sharing Capabilities: Multiple Controllers 00:22:19.946 Size (in LBAs): 1953525168 (931GiB) 00:22:19.946 Capacity (in LBAs): 1953525168 (931GiB) 00:22:19.946 Utilization (in LBAs): 1953525168 (931GiB) 00:22:19.946 UUID: b7a4df53-d738-4728-a3f6-5730e599d54c 00:22:19.946 Thin Provisioning: Not Supported 00:22:19.946 Per-NS Atomic Units: Yes 00:22:19.946 Atomic Boundary Size (Normal): 0 00:22:19.946 Atomic Boundary Size (PFail): 0 00:22:19.946 Atomic Boundary Offset: 0 00:22:19.946 NGUID/EUI64 Never Reused: No 00:22:19.946 ANA group ID: 1 00:22:19.946 Namespace Write Protected: No 00:22:19.946 Number of LBA Formats: 1 00:22:19.946 Current LBA Format: LBA Format #00 00:22:19.946 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:19.946 00:22:19.946 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:22:19.946 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:19.946 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:22:19.946 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:19.946 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:22:19.946 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:19.946 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:19.946 rmmod nvme_tcp 00:22:19.946 rmmod nvme_fabrics 00:22:20.205 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:20.205 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:22:20.205 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:22:20.205 19:04:57 
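[Editor's sketch] nvmftestfini has just unloaded the host-side modules (the rmmod nvme_tcp / rmmod nvme_fabrics messages above); the clean_kernel_target trap that runs next dismantles the configfs tree in reverse creation order, as the echo 0 / rm -f / rmdir sequence below shows. Condensed, reusing the variables from the setup sketch earlier (a mirror-image sketch under the same path assumptions):

    echo 0 > "$subsys/namespaces/1/enable"                         # quiesce the namespace
    rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"  # unpublish from the port
    rmdir "$subsys/namespaces/1" "$nvmet/ports/1" "$subsys"        # children before parents
    modprobe -r nvmet_tcp nvmet                                    # finally drop the target modules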
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:20.205 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:20.205 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:20.205 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:20.205 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:20.205 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:20.205 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.205 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:20.205 19:04:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.106 19:04:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:22.106 19:04:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:22:22.106 19:04:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:22.106 19:04:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:22:22.106 19:04:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:22.106 19:04:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:22.106 19:04:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:22.106 19:04:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:22.106 19:04:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:22:22.106 19:04:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:22:22.106 19:04:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:22:23.480 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:22:23.480 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:22:23.480 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:22:23.480 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:22:23.480 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:22:23.480 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:22:23.480 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:22:23.480 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:22:23.480 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:22:23.480 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:22:23.480 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:22:23.480 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:22:23.480 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:22:23.480 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:22:23.480 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:22:23.480 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:22:24.414 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:22:24.672 00:22:24.672 real 0m9.349s 00:22:24.672 user 0m1.937s 00:22:24.672 sys 0m3.295s 00:22:24.672 19:05:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:24.672 19:05:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.672 ************************************ 00:22:24.672 END TEST nvmf_identify_kernel_target 00:22:24.672 ************************************ 00:22:24.672 19:05:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:24.672 19:05:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:24.672 19:05:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:24.672 19:05:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.672 ************************************ 00:22:24.672 START TEST nvmf_auth_host 00:22:24.672 ************************************ 00:22:24.672 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:24.672 * Looking for test storage... 00:22:24.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:24.672 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:24.672 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:22:24.672 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:24.672 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:24.672 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:24.672 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:24.672 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:24.672 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:24.672 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:24.672 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:24.672 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:24.672 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:24.672 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:24.672 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:24.672 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:24.672 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:24.672 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:24.672 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:24.673 19:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:26.571 19:05:04 
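
host/auth.sh has now declared its full parameter space: three HMAC digests, five finite-field DH groups, and the keys/ckeys arrays populated below. The suite drives every combination through the same two helpers (the for-loops at host/auth.sh@100-104 appear near the end of this excerpt); a minimal sketch of that driver, reusing the array names from the trace:

digests=("sha256" "sha384" "sha512")
dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do                        # keys 0..4, set up below
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the kernel target side
      connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify, detach
    done
  done
done
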
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:26.571 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:26.571 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:26.571 Found net devices under 0000:09:00.0: cvl_0_0 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:26.571 Found net devices under 0000:09:00.1: cvl_0_1 00:22:26.571 19:05:04 
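
Both E810 ports report device ID 0x159b and resolve to their net devices purely through sysfs; the lookup the trace runs for each PCI address reduces to a glob plus a basename strip:

pci=0000:09:00.0                                  # first port found above
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # one entry per attached netdev
pci_net_devs=("${pci_net_devs[@]##*/}")           # keep only the interface name
echo "Found net devices under $pci: ${pci_net_devs[*]}"   # -> cvl_0_0
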
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:26.571 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:26.572 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:26.572 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:26.572 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:26.572 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:26.572 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:26.572 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:26.572 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:26.572 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:26.572 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:26.572 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:26.572 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:26.572 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:26.572 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:26.572 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:26.830 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:26.830 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:26.830 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:26.830 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:26.830 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:26.830 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:26.830 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:26.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:26.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:22:26.830 00:22:26.830 --- 10.0.0.2 ping statistics --- 00:22:26.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.830 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:22:26.830 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:26.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:26.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:22:26.830 00:22:26.830 --- 10.0.0.1 ping statistics --- 00:22:26.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.830 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:22:26.830 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:26.830 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:22:26.830 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:26.830 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:26.830 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:26.830 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:26.830 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:26.830 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:26.830 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:26.830 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:22:26.830 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:26.830 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:26.830 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.830 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3232401 00:22:26.830 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:22:26.830 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3232401 00:22:26.830 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3232401 ']' 00:22:26.830 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.830 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:26.830 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
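
The ping exchange above closes out nvmf_tcp_init: one E810 port becomes the target interface inside a private network namespace, the other stays in the root namespace as the initiator, and NVMe/TCP traffic on port 4420 is admitted. Condensed from the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target namespace -> initiator

With connectivity proven in both directions, nvmfappstart launches nvmf_tgt inside the namespace with the nvme_auth trace group enabled, as the tail of the entry above shows.
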
00:22:26.830 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:26.830 19:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.763 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:27.763 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:22:27.763 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:27.763 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:27.763 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.763 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:27.763 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:22:27.763 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:22:27.763 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:27.763 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:27.763 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:27.763 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:27.763 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:27.763 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:27.763 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0908504378de3287479aff6ebd7a2884 00:22:27.763 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:27.763 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.4D6 00:22:27.763 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0908504378de3287479aff6ebd7a2884 0 00:22:27.763 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0908504378de3287479aff6ebd7a2884 0 00:22:27.763 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:27.763 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:27.763 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0908504378de3287479aff6ebd7a2884 00:22:27.763 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:27.763 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.4D6 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.4D6 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.4D6 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:28.021 19:05:05 
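
The nine gen_dhchap_key calls in this stretch all follow the shape of the first: draw random bytes, print them as hex, and wrap that hex string into a DHHC-1 secret stored in a mode-0600 temp file. The wrapping python step is hidden behind a redirect, but the keys echoed later in this log show the base64 payload is the hex text itself plus four trailing bytes; the sketch below assumes, per the usual NVMe in-band-authentication key convention, that those four bytes are a little-endian CRC-32 of the secret:

key=$(xxd -p -c0 -l 16 /dev/urandom)       # 32 hex chars, as for keys[0]
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" 0 <<'EOF' > "$file"       # 2nd arg: 0=null 1=sha256 2=sha384 3=sha512
# hypothetical stand-in for SPDK's format_dhchap_key helper
import base64, binascii, struct, sys
secret = sys.argv[1].encode()              # the hex string itself, as ASCII
crc = struct.pack('<I', binascii.crc32(secret) & 0xffffffff)
print(f"DHHC-1:{int(sys.argv[2]):02}:{base64.b64encode(secret + crc).decode()}:")
EOF
chmod 0600 "$file"
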
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=760fd58a842fd023fa9a28c1abf952ee4e46a426cb481e6165b17dca873609ab 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.sl7 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 760fd58a842fd023fa9a28c1abf952ee4e46a426cb481e6165b17dca873609ab 3 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 760fd58a842fd023fa9a28c1abf952ee4e46a426cb481e6165b17dca873609ab 3 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=760fd58a842fd023fa9a28c1abf952ee4e46a426cb481e6165b17dca873609ab 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.sl7 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.sl7 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.sl7 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=29287b7239ca6ee58d9bb80aba161b0022c16b9b9797f9e4 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.K9T 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 29287b7239ca6ee58d9bb80aba161b0022c16b9b9797f9e4 0 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 29287b7239ca6ee58d9bb80aba161b0022c16b9b9797f9e4 0 
00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=29287b7239ca6ee58d9bb80aba161b0022c16b9b9797f9e4 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.K9T 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.K9T 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.K9T 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5ebbb619428d1efb540b956161e87aa879168d2f37e709e4 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.n3O 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5ebbb619428d1efb540b956161e87aa879168d2f37e709e4 2 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5ebbb619428d1efb540b956161e87aa879168d2f37e709e4 2 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5ebbb619428d1efb540b956161e87aa879168d2f37e709e4 00:22:28.021 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.n3O 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.n3O 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.n3O 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:28.022 19:05:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=417f2d47130795c9765af53b128df2a1 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.fp9 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 417f2d47130795c9765af53b128df2a1 1 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 417f2d47130795c9765af53b128df2a1 1 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=417f2d47130795c9765af53b128df2a1 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.fp9 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.fp9 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.fp9 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=22784eb4960144c23f34ac45688e8f41 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Kbq 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 22784eb4960144c23f34ac45688e8f41 1 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 22784eb4960144c23f34ac45688e8f41 1 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=22784eb4960144c23f34ac45688e8f41 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:22:28.022 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Kbq 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Kbq 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Kbq 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9614410e970291856c343438435604a4dcd19bc9db981334 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Y6B 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9614410e970291856c343438435604a4dcd19bc9db981334 2 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9614410e970291856c343438435604a4dcd19bc9db981334 2 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9614410e970291856c343438435604a4dcd19bc9db981334 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Y6B 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Y6B 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Y6B 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:28.279 19:05:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=718aebcc4b5cbc036597351845ad0673 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Gat 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 718aebcc4b5cbc036597351845ad0673 0 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 718aebcc4b5cbc036597351845ad0673 0 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=718aebcc4b5cbc036597351845ad0673 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Gat 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Gat 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Gat 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ce3e457e9f63ae7a5748d18d7d545a5d679042a76b46a60f1ffbff4e397ba760 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.TmE 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ce3e457e9f63ae7a5748d18d7d545a5d679042a76b46a60f1ffbff4e397ba760 3 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ce3e457e9f63ae7a5748d18d7d545a5d679042a76b46a60f1ffbff4e397ba760 3 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ce3e457e9f63ae7a5748d18d7d545a5d679042a76b46a60f1ffbff4e397ba760 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.TmE 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.TmE 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.TmE 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3232401 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3232401 ']' 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:28.279 19:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.536 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:28.536 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:22:28.536 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:28.536 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.4D6 00:22:28.536 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.536 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.537 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.537 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.sl7 ]] 00:22:28.537 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sl7 00:22:28.537 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.537 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.K9T 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.n3O ]] 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.n3O 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.fp9 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Kbq ]] 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Kbq 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Y6B 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Gat ]] 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Gat 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.TmE 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:28.795 19:05:06 
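
Every generated secret is now registered with the running target through its file-based keyring; rpc_cmd wraps scripts/rpc.py against /var/tmp/spdk.sock, so the adds above amount to the following (key4 deliberately has no counterpart, matching the empty ckeys[4] assignment earlier):

scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.4D6
scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sl7
scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.K9T
scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.n3O
scripts/rpc.py keyring_file_add_key key2  /tmp/spdk.key-sha256.fp9
scripts/rpc.py keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Kbq
scripts/rpc.py keyring_file_add_key key3  /tmp/spdk.key-sha384.Y6B
scripts/rpc.py keyring_file_add_key ckey3 /tmp/spdk.key-null.Gat
scripts/rpc.py keyring_file_add_key key4  /tmp/spdk.key-sha512.TmE
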
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:28.795 19:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:29.730 Waiting for block devices as requested 00:22:29.730 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:29.988 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:29.988 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:29.988 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:30.246 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:30.246 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:30.246 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:30.246 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:30.505 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:22:30.505 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:30.763 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:30.763 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:30.763 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:30.763 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:30.763 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:31.022 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:31.022 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:31.280 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:31.280 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:31.280 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:22:31.280 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:22:31.280 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:31.280 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:31.280 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:22:31.280 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:31.280 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:31.538 No valid GPT data, bailing 00:22:31.538 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:31.538 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:22:31.538 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:22:31.538 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:31.538 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:22:31.538 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:31.538 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:31.538 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:31.538 19:05:08 
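
The scan above picks the namespace the kernel target will export: each /sys/block/nvme* entry is kept only if it is not zoned and not already claimed, with scripts/spdk-gpt.py serving as the in-use probe ('No valid GPT data, bailing' means the disk carries no partition table and is free to take). A sketch of that selection, with the GPT probe reduced to a comment:

nvme=
for block in /sys/block/nvme*; do
  [[ -e $block ]] || continue
  # zoned namespaces are skipped: queue/zoned reads "none" on ordinary disks
  [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
  # block_in_use consults scripts/spdk-gpt.py; finding no GPT means the disk is free
  nvme=/dev/${block##*/}
  break
done
[[ -b $nvme ]]    # /dev/nvme0n1 in this run
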
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:22:31.538 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:22:31.538 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:22:31.538 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:22:31.538 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:22:31.538 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:22:31.538 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:22:31.538 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:22:31.538 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:31.538 19:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:22:31.538 00:22:31.538 Discovery Log Number of Records 2, Generation counter 2 00:22:31.538 =====Discovery Log Entry 0====== 00:22:31.538 trtype: tcp 00:22:31.538 adrfam: ipv4 00:22:31.538 subtype: current discovery subsystem 00:22:31.538 treq: not specified, sq flow control disable supported 00:22:31.538 portid: 1 00:22:31.538 trsvcid: 4420 00:22:31.538 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:31.538 traddr: 10.0.0.1 00:22:31.538 eflags: none 00:22:31.538 sectype: none 00:22:31.538 =====Discovery Log Entry 1====== 00:22:31.538 trtype: tcp 00:22:31.538 adrfam: ipv4 00:22:31.538 subtype: nvme subsystem 00:22:31.538 treq: not specified, sq flow control disable supported 00:22:31.538 portid: 1 00:22:31.538 trsvcid: 4420 00:22:31.538 subnqn: nqn.2024-02.io.spdk:cnode0 00:22:31.538 traddr: 10.0.0.1 00:22:31.538 eflags: none 00:22:31.538 sectype: none 00:22:31.538 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:31.538 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:22:31.538 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:31.538 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:31.538 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:31.538 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:31.538 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:31.538 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:31.538 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjkyODdiNzIzOWNhNmVlNThkOWJiODBhYmExNjFiMDAyMmMxNmI5Yjk3OTdmOWU0ISNUTw==: 00:22:31.538 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: 00:22:31.538 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:31.538 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host 
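
xtrace records only the echo side of each configfs write, so the attribute paths below are reconstructed from the kernel nvmet configfs ABI rather than read out of the log. Taken together, this phase builds the kernel soft target on 10.0.0.1:4420, confirms it answers nvme discover (two records: the discovery subsystem plus cnode0), and restricts access to the one host NQN before authentication is layered on:

cd /sys/kernel/config/nvmet
mkdir -p subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 ports/1
echo /dev/nvme0n1 > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/device_path
echo 1 > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable
echo 10.0.0.1 > ports/1/addr_traddr
echo tcp > ports/1/addr_trtype
echo 4420 > ports/1/addr_trsvcid
echo ipv4 > ports/1/addr_adrfam
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ports/1/subsystems/
mkdir hosts/nqn.2024-02.io.spdk:host0
echo 0 > subsystems/nqn.2024-02.io.spdk:cnode0/attr_allow_any_host
ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/
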
-- host/auth.sh@49 -- # echo ffdhe2048 00:22:31.538 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjkyODdiNzIzOWNhNmVlNThkOWJiODBhYmExNjFiMDAyMmMxNmI5Yjk3OTdmOWU0ISNUTw==: 00:22:31.538 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: ]] 00:22:31.538 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: 00:22:31.538 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:22:31.538 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:22:31.538 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:22:31.538 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:31.538 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:22:31.539 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:31.539 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:22:31.539 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:31.539 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:31.539 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:31.539 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:31.539 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.539 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.539 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.539 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:31.539 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:31.539 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:31.539 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:31.539 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:31.539 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:31.539 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:31.539 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:31.539 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:31.539 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:31.539 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:31.539 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:31.539 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.539 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.817 nvme0n1 00:22:31.817 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.817 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:31.817 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.817 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.817 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:31.817 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.817 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.817 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:31.817 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.817 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.817 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.817 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:31.817 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:31.817 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:31.817 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:22:31.817 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:31.817 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:31.817 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:31.817 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:31.817 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDkwODUwNDM3OGRlMzI4NzQ3OWFmZjZlYmQ3YTI4ODR+6QFP: 00:22:31.817 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: 00:22:31.817 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:31.817 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:31.817 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDkwODUwNDM3OGRlMzI4NzQ3OWFmZjZlYmQ3YTI4ODR+6QFP: 00:22:31.817 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: ]] 00:22:31.818 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: 00:22:31.818 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
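
For reference, the host-side flow that the trace keeps exercising reduces to two RPCs: one to restrict which DH-HMAC-CHAP digests and DH groups the initiator may negotiate, and one to attach with the keyring entries. A minimal standalone rendering is below; rpc_cmd in the trace is the suite's wrapper around SPDK's scripts/rpc.py, and key1/ckey1 are assumed to have been registered in the keyring earlier in the test, so this is a sketch rather than a drop-in command:

# Allow only the digest/dhgroup under test, then attach using host key 1
# and controller key 1; the attach fails unless the handshake succeeds.
./scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
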
00:22:31.818 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:31.818 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:31.818 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:31.818 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:31.818 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:31.818 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:31.818 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.818 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.818 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.818 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:31.818 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:31.818 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:31.818 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:31.818 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:31.818 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:31.818 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:31.818 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:31.818 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:31.818 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:31.818 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:31.818 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:31.818 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.818 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.090 nvme0n1 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:32.090 19:05:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjkyODdiNzIzOWNhNmVlNThkOWJiODBhYmExNjFiMDAyMmMxNmI5Yjk3OTdmOWU0ISNUTw==: 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjkyODdiNzIzOWNhNmVlNThkOWJiODBhYmExNjFiMDAyMmMxNmI5Yjk3OTdmOWU0ISNUTw==: 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: ]] 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:32.090 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:32.091 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:32.091 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:32.091 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:32.091 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:32.091 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:32.091 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:32.091 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:32.091 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:32.091 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:32.091 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:32.091 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.091 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.091 nvme0n1 00:22:32.091 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.091 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:32.091 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.091 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:32.091 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.091 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE3ZjJkNDcxMzA3OTVjOTc2NWFmNTNiMTI4ZGYyYTHbkR3v: 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NDE3ZjJkNDcxMzA3OTVjOTc2NWFmNTNiMTI4ZGYyYTHbkR3v: 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: ]] 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.350 nvme0n1 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.350 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTYxNDQxMGU5NzAyOTE4NTZjMzQzNDM4NDM1NjA0YTRkY2QxOWJjOWRiOTgxMzM02ulJWw==: 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTYxNDQxMGU5NzAyOTE4NTZjMzQzNDM4NDM1NjA0YTRkY2QxOWJjOWRiOTgxMzM02ulJWw==: 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: ]] 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.351 19:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.609 nvme0n1 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Y2UzZTQ1N2U5ZjYzYWU3YTU3NDhkMThkN2Q1NDVhNWQ2NzkwNDJhNzZiNDZhNjBmMWZmYmZmNGUzOTdiYTc2MM1osWc=: 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UzZTQ1N2U5ZjYzYWU3YTU3NDhkMThkN2Q1NDVhNWQ2NzkwNDJhNzZiNDZhNjBmMWZmYmZmNGUzOTdiYTc2MM1osWc=: 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.609 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.867 nvme0n1 00:22:32.867 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.868 19:05:10 
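
Keyid 4 above is the one entry without a controller key (its ckey is empty), which is why the corresponding attach is traced with --dhchap-key key4 only. The ckey=() line at auth.sh@58 handles this with a conditional array expansion; a sketch of the same idiom (the keys/ckeys array names come from the trace, the surrounding command is a standalone rendering):

# ${ckeys[keyid]:+...} expands to nothing when ckeys[keyid] is empty or
# unset, so the --dhchap-ctrlr-key pair is silently dropped from the args.
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 \
    -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" "${ckey[@]}"
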
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDkwODUwNDM3OGRlMzI4NzQ3OWFmZjZlYmQ3YTI4ODR+6QFP: 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDkwODUwNDM3OGRlMzI4NzQ3OWFmZjZlYmQ3YTI4ODR+6QFP: 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: ]] 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.868 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.126 nvme0n1 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjkyODdiNzIzOWNhNmVlNThkOWJiODBhYmExNjFiMDAyMmMxNmI5Yjk3OTdmOWU0ISNUTw==: 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjkyODdiNzIzOWNhNmVlNThkOWJiODBhYmExNjFiMDAyMmMxNmI5Yjk3OTdmOWU0ISNUTw==: 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: ]] 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:33.127 
19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.127 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.386 nvme0n1 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE3ZjJkNDcxMzA3OTVjOTc2NWFmNTNiMTI4ZGYyYTHbkR3v: 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE3ZjJkNDcxMzA3OTVjOTc2NWFmNTNiMTI4ZGYyYTHbkR3v: 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: ]] 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:33.386 19:05:10 
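
The get_main_ns_ip expansion that repeats before every attach (nvmf/common.sh@741-755) just maps the transport to the right environment variable and dereferences it. Condensed below; the transport variable name TEST_TRANSPORT is an assumption, the candidate table is taken from the trace:

# For tcp the initiator dials NVMF_INITIATOR_IP; for rdma it would use
# NVMF_FIRST_TARGET_IP. Indirect expansion turns the name into the value.
declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
ref=${ip_candidates[$TEST_TRANSPORT]}
ip=${!ref}    # 10.0.0.1 in this run
echo "$ip"
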
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.386 19:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.644 nvme0n1 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTYxNDQxMGU5NzAyOTE4NTZjMzQzNDM4NDM1NjA0YTRkY2QxOWJjOWRiOTgxMzM02ulJWw==: 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTYxNDQxMGU5NzAyOTE4NTZjMzQzNDM4NDM1NjA0YTRkY2QxOWJjOWRiOTgxMzM02ulJWw==: 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: ]] 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.644 19:05:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:33.644 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.645 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.902 nvme0n1 00:22:33.902 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.902 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.902 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.902 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.902 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:33.902 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.902 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.902 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.902 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.902 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.902 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.902 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:33.902 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:22:33.902 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.902 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:33.902 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:33.902 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:33.902 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UzZTQ1N2U5ZjYzYWU3YTU3NDhkMThkN2Q1NDVhNWQ2NzkwNDJhNzZiNDZhNjBmMWZmYmZmNGUzOTdiYTc2MM1osWc=: 00:22:33.902 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:33.902 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:33.902 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:33.902 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UzZTQ1N2U5ZjYzYWU3YTU3NDhkMThkN2Q1NDVhNWQ2NzkwNDJhNzZiNDZhNjBmMWZmYmZmNGUzOTdiYTc2MM1osWc=: 00:22:33.902 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:33.903 19:05:11 
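
On the target side, each nvmet_auth_set_key call traced above (auth.sh@42-51) re-arms the kernel nvmet host entry before the next connect attempt. The echoes at @48-@51 are redirected into per-host configfs attributes; xtrace does not show redirections, so this sketch reconstructs them assuming the standard nvmet attribute names:

# $host is the configfs host entry, e.g.
# /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo "hmac(${digest})" > "${host}/dhchap_hash"     # auth.sh@48
echo "${dhgroup}"      > "${host}/dhchap_dhgroup"  # auth.sh@49
echo "${key}"          > "${host}/dhchap_key"      # auth.sh@50
[[ -z ${ckey} ]] || echo "${ckey}" > "${host}/dhchap_ctrl_key"  # auth.sh@51
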
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4
00:22:33.903 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:33.903 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:33.903 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:22:33.903 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:22:33.903 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:33.903 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:22:33.903 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:33.903 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:33.903 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:33.903 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:33.903 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:33.903 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:33.903 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:33.903 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:33.903 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:33.903 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:33.903 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:33.903 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:33.903 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:33.903 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:33.903 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:22:33.903 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:33.903 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:34.161 nvme0n1
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDkwODUwNDM3OGRlMzI4NzQ3OWFmZjZlYmQ3YTI4ODR+6QFP:
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=:
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDkwODUwNDM3OGRlMzI4NzQ3OWFmZjZlYmQ3YTI4ODR+6QFP:
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: ]]
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=:
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:34.161 19:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:34.726 nvme0n1
00:22:34.726 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:34.726 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:34.726 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:34.726 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:34.726 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:34.726 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjkyODdiNzIzOWNhNmVlNThkOWJiODBhYmExNjFiMDAyMmMxNmI5Yjk3OTdmOWU0ISNUTw==:
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==:
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjkyODdiNzIzOWNhNmVlNThkOWJiODBhYmExNjFiMDAyMmMxNmI5Yjk3OTdmOWU0ISNUTw==:
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: ]]
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==:
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:34.727 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:34.985 nvme0n1
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE3ZjJkNDcxMzA3OTVjOTc2NWFmNTNiMTI4ZGYyYTHbkR3v:
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s:
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE3ZjJkNDcxMzA3OTVjOTc2NWFmNTNiMTI4ZGYyYTHbkR3v:
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: ]]
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s:
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:34.985 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:35.243 nvme0n1
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTYxNDQxMGU5NzAyOTE4NTZjMzQzNDM4NDM1NjA0YTRkY2QxOWJjOWRiOTgxMzM02ulJWw==:
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2:
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTYxNDQxMGU5NzAyOTE4NTZjMzQzNDM4NDM1NjA0YTRkY2QxOWJjOWRiOTgxMzM02ulJWw==:
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: ]]
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2:
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:35.243 19:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:35.501 nvme0n1
00:22:35.501 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:35.501 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:35.501 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:35.501 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:35.501 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:35.759 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:35.759 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:35.759 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:35.759 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:35.759 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:35.759 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:35.759 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:35.759 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:22:35.759 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:35.759 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:35.759 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:22:35.759 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:22:35.759 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UzZTQ1N2U5ZjYzYWU3YTU3NDhkMThkN2Q1NDVhNWQ2NzkwNDJhNzZiNDZhNjBmMWZmYmZmNGUzOTdiYTc2MM1osWc=:
00:22:35.759 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:22:35.759 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:35.760 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:22:35.760 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UzZTQ1N2U5ZjYzYWU3YTU3NDhkMThkN2Q1NDVhNWQ2NzkwNDJhNzZiNDZhNjBmMWZmYmZmNGUzOTdiYTc2MM1osWc=:
00:22:35.760 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:22:35.760 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4
00:22:35.760 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:35.760 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:35.760 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:22:35.760 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:22:35.760 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:35.760 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:22:35.760 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:35.760 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:35.760 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:35.760 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:35.760 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:35.760 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:35.760 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:35.760 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:35.760 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:35.760 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:35.760 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:35.760 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:35.760 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:35.760 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:35.760 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:22:35.760 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:35.760 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:36.017 nvme0n1
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDkwODUwNDM3OGRlMzI4NzQ3OWFmZjZlYmQ3YTI4ODR+6QFP:
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=:
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDkwODUwNDM3OGRlMzI4NzQ3OWFmZjZlYmQ3YTI4ODR+6QFP:
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: ]]
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=:
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:36.017 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:36.018 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:36.018 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:36.018 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:36.018 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:36.018 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:36.018 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:36.018 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:36.018 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:36.018 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:36.018 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:36.018 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:36.018 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:36.018 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:36.018 19:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:36.583 nvme0n1
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjkyODdiNzIzOWNhNmVlNThkOWJiODBhYmExNjFiMDAyMmMxNmI5Yjk3OTdmOWU0ISNUTw==:
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==:
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjkyODdiNzIzOWNhNmVlNThkOWJiODBhYmExNjFiMDAyMmMxNmI5Yjk3OTdmOWU0ISNUTw==:
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: ]]
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==:
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:36.583 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:36.584 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:36.584 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:36.584 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:36.584 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:36.584 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:36.584 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:36.584 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:36.584 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:36.584 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:36.584 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:37.149 nvme0n1
00:22:37.149 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:37.149 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:37.149 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:37.149 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:37.149 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:37.149 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:37.149 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:37.149 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:37.149 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:37.149 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:37.407 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:37.407 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:37.407 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:22:37.407 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:37.407 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:37.407 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:22:37.407 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:22:37.407 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE3ZjJkNDcxMzA3OTVjOTc2NWFmNTNiMTI4ZGYyYTHbkR3v:
00:22:37.407 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s:
00:22:37.407 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:37.407 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:22:37.407 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE3ZjJkNDcxMzA3OTVjOTc2NWFmNTNiMTI4ZGYyYTHbkR3v:
00:22:37.407 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: ]]
00:22:37.407 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s:
00:22:37.407 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:22:37.407 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:37.407 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:37.407 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:22:37.407 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:22:37.407 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:37.407 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:22:37.407 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:37.407 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:37.407 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:37.407 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:37.407 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:37.407 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:37.407 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:37.407 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:37.407 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:37.408 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:37.408 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:37.408 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:37.408 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:37.408 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:37.408 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:37.408 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:37.408 19:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:37.973 nvme0n1
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTYxNDQxMGU5NzAyOTE4NTZjMzQzNDM4NDM1NjA0YTRkY2QxOWJjOWRiOTgxMzM02ulJWw==:
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2:
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTYxNDQxMGU5NzAyOTE4NTZjMzQzNDM4NDM1NjA0YTRkY2QxOWJjOWRiOTgxMzM02ulJWw==:
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: ]]
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2:
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:37.973 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:38.539 nvme0n1
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UzZTQ1N2U5ZjYzYWU3YTU3NDhkMThkN2Q1NDVhNWQ2NzkwNDJhNzZiNDZhNjBmMWZmYmZmNGUzOTdiYTc2MM1osWc=:
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UzZTQ1N2U5ZjYzYWU3YTU3NDhkMThkN2Q1NDVhNWQ2NzkwNDJhNzZiNDZhNjBmMWZmYmZmNGUzOTdiYTc2MM1osWc=:
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:38.539 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:38.540 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:38.540 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:38.540 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:38.540 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:38.540 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:38.540 19:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:38.540 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:38.540 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:38.540 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:22:38.540 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:38.540 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:39.105 nvme0n1
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDkwODUwNDM3OGRlMzI4NzQ3OWFmZjZlYmQ3YTI4ODR+6QFP:
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=:
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDkwODUwNDM3OGRlMzI4NzQ3OWFmZjZlYmQ3YTI4ODR+6QFP:
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: ]]
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=:
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:39.105 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:39.106 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:39.106 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:39.106 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:39.106 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:39.106 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:39.106 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:39.106 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:39.106 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:39.106 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:39.106 19:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:40.039 nvme0n1
00:22:40.039 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:40.039 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:40.039 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:40.039 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:40.039 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:40.039 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:40.039 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:40.039 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:40.039 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:40.039 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjkyODdiNzIzOWNhNmVlNThkOWJiODBhYmExNjFiMDAyMmMxNmI5Yjk3OTdmOWU0ISNUTw==:
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==:
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjkyODdiNzIzOWNhNmVlNThkOWJiODBhYmExNjFiMDAyMmMxNmI5Yjk3OTdmOWU0ISNUTw==:
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: ]]
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==:
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:40.297 19:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:41.231 nvme0n1
00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE3ZjJkNDcxMzA3OTVjOTc2NWFmNTNiMTI4ZGYyYTHbkR3v: 00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: 00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE3ZjJkNDcxMzA3OTVjOTc2NWFmNTNiMTI4ZGYyYTHbkR3v: 00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: ]] 00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: 00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.231 19:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.164 nvme0n1 00:22:42.164 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.164 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.164 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.164 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.164 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:42.164 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.164 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.164 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.164 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTYxNDQxMGU5NzAyOTE4NTZjMzQzNDM4NDM1NjA0YTRkY2QxOWJjOWRiOTgxMzM02ulJWw==: 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTYxNDQxMGU5NzAyOTE4NTZjMzQzNDM4NDM1NjA0YTRkY2QxOWJjOWRiOTgxMzM02ulJWw==: 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: ]] 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:42.165 
19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.165 19:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.098 nvme0n1 00:22:43.098 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.098 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:43.098 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.098 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.098 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UzZTQ1N2U5ZjYzYWU3YTU3NDhkMThkN2Q1NDVhNWQ2NzkwNDJhNzZiNDZhNjBmMWZmYmZmNGUzOTdiYTc2MM1osWc=: 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UzZTQ1N2U5ZjYzYWU3YTU3NDhkMThkN2Q1NDVhNWQ2NzkwNDJhNzZiNDZhNjBmMWZmYmZmNGUzOTdiYTc2MM1osWc=: 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:43.356 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:43.357 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:43.357 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.357 19:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.292 nvme0n1 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDkwODUwNDM3OGRlMzI4NzQ3OWFmZjZlYmQ3YTI4ODR+6QFP: 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDkwODUwNDM3OGRlMzI4NzQ3OWFmZjZlYmQ3YTI4ODR+6QFP: 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: ]] 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.292 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.553 nvme0n1 00:22:44.553 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.553 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:44.553 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.553 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.553 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:22:44.553 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.553 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.553 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:44.553 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.553 19:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjkyODdiNzIzOWNhNmVlNThkOWJiODBhYmExNjFiMDAyMmMxNmI5Yjk3OTdmOWU0ISNUTw==: 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjkyODdiNzIzOWNhNmVlNThkOWJiODBhYmExNjFiMDAyMmMxNmI5Yjk3OTdmOWU0ISNUTw==: 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: ]] 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.553 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.811 nvme0n1 00:22:44.811 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.811 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:44.811 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.811 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.811 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:44.811 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.811 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.811 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:44.811 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.811 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.811 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.811 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:44.811 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:22:44.811 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:44.811 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:44.811 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:44.811 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:44.811 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE3ZjJkNDcxMzA3OTVjOTc2NWFmNTNiMTI4ZGYyYTHbkR3v: 00:22:44.811 19:05:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: 00:22:44.811 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:44.811 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:44.811 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE3ZjJkNDcxMzA3OTVjOTc2NWFmNTNiMTI4ZGYyYTHbkR3v: 00:22:44.811 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: ]] 00:22:44.811 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: 00:22:44.811 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:22:44.811 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:44.811 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:44.811 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:44.811 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:44.812 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:44.812 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:44.812 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.812 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.812 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.812 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:44.812 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:44.812 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:44.812 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:44.812 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.812 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.812 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:44.812 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:44.812 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:44.812 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:44.812 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:44.812 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.812 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.812 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.069 nvme0n1 00:22:45.069 19:05:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.069 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTYxNDQxMGU5NzAyOTE4NTZjMzQzNDM4NDM1NjA0YTRkY2QxOWJjOWRiOTgxMzM02ulJWw==: 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTYxNDQxMGU5NzAyOTE4NTZjMzQzNDM4NDM1NjA0YTRkY2QxOWJjOWRiOTgxMzM02ulJWw==: 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: ]] 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.070 nvme0n1 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.070 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.328 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.328 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UzZTQ1N2U5ZjYzYWU3YTU3NDhkMThkN2Q1NDVhNWQ2NzkwNDJhNzZiNDZhNjBmMWZmYmZmNGUzOTdiYTc2MM1osWc=: 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UzZTQ1N2U5ZjYzYWU3YTU3NDhkMThkN2Q1NDVhNWQ2NzkwNDJhNzZiNDZhNjBmMWZmYmZmNGUzOTdiYTc2MM1osWc=: 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.329 nvme0n1 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.329 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDkwODUwNDM3OGRlMzI4NzQ3OWFmZjZlYmQ3YTI4ODR+6QFP: 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDkwODUwNDM3OGRlMzI4NzQ3OWFmZjZlYmQ3YTI4ODR+6QFP: 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: ]] 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.587 19:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.587 nvme0n1 00:22:45.587 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.587 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:45.587 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:45.587 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.587 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.587 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.845 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.845 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:45.845 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.845 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.845 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.846 
19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjkyODdiNzIzOWNhNmVlNThkOWJiODBhYmExNjFiMDAyMmMxNmI5Yjk3OTdmOWU0ISNUTw==:
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==:
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjkyODdiNzIzOWNhNmVlNThkOWJiODBhYmExNjFiMDAyMmMxNmI5Yjk3OTdmOWU0ISNUTw==:
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: ]]
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==:
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:45.846 nvme0n1
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:45.846 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE3ZjJkNDcxMzA3OTVjOTc2NWFmNTNiMTI4ZGYyYTHbkR3v:
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s:
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE3ZjJkNDcxMzA3OTVjOTc2NWFmNTNiMTI4ZGYyYTHbkR3v:
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: ]]
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s:
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:46.104 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:46.105 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:46.105 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:46.105 nvme0n1
00:22:46.105 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:46.105 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:46.105 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:46.105 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:46.105 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTYxNDQxMGU5NzAyOTE4NTZjMzQzNDM4NDM1NjA0YTRkY2QxOWJjOWRiOTgxMzM02ulJWw==:
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2:
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTYxNDQxMGU5NzAyOTE4NTZjMzQzNDM4NDM1NjA0YTRkY2QxOWJjOWRiOTgxMzM02ulJWw==:
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: ]]
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2:
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:46.362 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:46.363 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:22:46.363 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:46.363 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:46.363 nvme0n1
00:22:46.363 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:46.363 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:46.363 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:46.363 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:46.620 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:46.620 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:46.620 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:46.620 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:46.620 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:46.620 19:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UzZTQ1N2U5ZjYzYWU3YTU3NDhkMThkN2Q1NDVhNWQ2NzkwNDJhNzZiNDZhNjBmMWZmYmZmNGUzOTdiYTc2MM1osWc=:
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UzZTQ1N2U5ZjYzYWU3YTU3NDhkMThkN2Q1NDVhNWQ2NzkwNDJhNzZiNDZhNjBmMWZmYmZmNGUzOTdiYTc2MM1osWc=:
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:46.620 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:46.620 nvme0n1
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDkwODUwNDM3OGRlMzI4NzQ3OWFmZjZlYmQ3YTI4ODR+6QFP:
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=:
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDkwODUwNDM3OGRlMzI4NzQ3OWFmZjZlYmQ3YTI4ODR+6QFP:
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: ]]
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=:
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:46.878 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:47.136 nvme0n1
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjkyODdiNzIzOWNhNmVlNThkOWJiODBhYmExNjFiMDAyMmMxNmI5Yjk3OTdmOWU0ISNUTw==:
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==:
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjkyODdiNzIzOWNhNmVlNThkOWJiODBhYmExNjFiMDAyMmMxNmI5Yjk3OTdmOWU0ISNUTw==:
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: ]]
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==:
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:47.136 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:47.394 nvme0n1
00:22:47.394 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:47.394 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:47.394 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:47.394 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:47.394 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:47.652 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:47.652 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:47.652 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:47.652 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:47.652 19:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:47.652 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:47.652 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:47.652 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2
00:22:47.652 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:47.652 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:22:47.652 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:22:47.652 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:22:47.652 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE3ZjJkNDcxMzA3OTVjOTc2NWFmNTNiMTI4ZGYyYTHbkR3v:
00:22:47.652 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s:
00:22:47.652 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:22:47.652 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:22:47.652 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE3ZjJkNDcxMzA3OTVjOTc2NWFmNTNiMTI4ZGYyYTHbkR3v:
00:22:47.652 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: ]]
00:22:47.652 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s:
00:22:47.652 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2
00:22:47.652 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:47.652 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:22:47.652 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:22:47.652 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:22:47.652 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:47.652 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:22:47.652 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:47.652 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:47.652 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:47.653 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:47.653 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:47.653 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:47.653 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:47.653 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:47.653 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:47.653 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:47.653 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:47.653 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:47.653 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:47.653 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:47.653 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:47.653 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:47.653 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:47.911 nvme0n1
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTYxNDQxMGU5NzAyOTE4NTZjMzQzNDM4NDM1NjA0YTRkY2QxOWJjOWRiOTgxMzM02ulJWw==:
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2:
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTYxNDQxMGU5NzAyOTE4NTZjMzQzNDM4NDM1NjA0YTRkY2QxOWJjOWRiOTgxMzM02ulJWw==:
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: ]]
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2:
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:47.911 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:47.912 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:47.912 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:47.912 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:22:47.912 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:47.912 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:48.171 nvme0n1
00:22:48.171 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:48.171 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:48.171 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:48.171 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:48.171 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:48.171 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:48.171 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:48.171 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:48.171 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:48.171 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:48.171 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:48.171 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:48.171 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4
00:22:48.171 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:48.171 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:22:48.171 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:22:48.171 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:22:48.171 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UzZTQ1N2U5ZjYzYWU3YTU3NDhkMThkN2Q1NDVhNWQ2NzkwNDJhNzZiNDZhNjBmMWZmYmZmNGUzOTdiYTc2MM1osWc=:
00:22:48.171 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:22:48.171 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:22:48.171 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:22:48.171 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UzZTQ1N2U5ZjYzYWU3YTU3NDhkMThkN2Q1NDVhNWQ2NzkwNDJhNzZiNDZhNjBmMWZmYmZmNGUzOTdiYTc2MM1osWc=:
00:22:48.171 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:22:48.171 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4
00:22:48.171 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:48.171 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:22:48.171 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:22:48.171 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:22:48.171 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:48.171 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:22:48.171 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:48.171 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:48.463 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:48.463 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:48.463 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:48.463 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:48.463 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:48.463 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:48.463 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:48.463 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:48.463 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:48.463 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:48.463 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:48.463 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:48.463 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:22:48.463 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:48.463 19:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:48.722 nvme0n1
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDkwODUwNDM3OGRlMzI4NzQ3OWFmZjZlYmQ3YTI4ODR+6QFP:
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=:
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDkwODUwNDM3OGRlMzI4NzQ3OWFmZjZlYmQ3YTI4ODR+6QFP:
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: ]]
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=:
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:48.722 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:49.289 nvme0n1
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjkyODdiNzIzOWNhNmVlNThkOWJiODBhYmExNjFiMDAyMmMxNmI5Yjk3OTdmOWU0ISNUTw==:
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==:
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjkyODdiNzIzOWNhNmVlNThkOWJiODBhYmExNjFiMDAyMmMxNmI5Yjk3OTdmOWU0ISNUTw==:
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: ]]
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==:
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:49.289 19:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:49.856 nvme0n1
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE3ZjJkNDcxMzA3OTVjOTc2NWFmNTNiMTI4ZGYyYTHbkR3v:
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s:
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE3ZjJkNDcxMzA3OTVjOTc2NWFmNTNiMTI4ZGYyYTHbkR3v:
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: ]]
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s:
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:49.856 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:50.422 nvme0n1
00:22:50.422 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:50.422 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:50.422 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:50.422 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:50.422 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:50.422 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:50.422 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:50.422 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:50.422 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:50.422 19:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:50.422 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:50.422 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:50.422 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3
00:22:50.422 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:50.422 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:22:50.422 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:22:50.422 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:22:50.422 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTYxNDQxMGU5NzAyOTE4NTZjMzQzNDM4NDM1NjA0YTRkY2QxOWJjOWRiOTgxMzM02ulJWw==:
00:22:50.422 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2:
00:22:50.422 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:22:50.422 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:22:50.422 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTYxNDQxMGU5NzAyOTE4NTZjMzQzNDM4NDM1NjA0YTRkY2QxOWJjOWRiOTgxMzM02ulJWw==:
00:22:50.422 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: ]]
00:22:50.422 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2:
00:22:50.422 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3
00:22:50.422 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:50.422 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:22:50.422 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:22:50.422 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:22:50.422 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:50.422 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:22:50.422 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:50.422 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:50.422 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:50.422 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:50.422 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:50.422 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:50.422 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:50.422 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:50.680 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:50.680 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:50.680 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:50.680 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:50.680 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:50.680 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:50.680 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:22:50.680 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:50.680 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:51.245 nvme0n1
00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4
00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UzZTQ1N2U5ZjYzYWU3YTU3NDhkMThkN2Q1NDVhNWQ2NzkwNDJhNzZiNDZhNjBmMWZmYmZmNGUzOTdiYTc2MM1osWc=:
00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UzZTQ1N2U5ZjYzYWU3YTU3NDhkMThkN2Q1NDVhNWQ2NzkwNDJhNzZiNDZhNjBmMWZmYmZmNGUzOTdiYTc2MM1osWc=:
00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4
00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host --
common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.245 19:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.811 nvme0n1 00:22:51.811 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.811 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:51.811 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:51.811 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.811 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.811 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.811 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.811 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:51.811 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.811 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.811 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.811 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:51.812 19:05:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDkwODUwNDM3OGRlMzI4NzQ3OWFmZjZlYmQ3YTI4ODR+6QFP: 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDkwODUwNDM3OGRlMzI4NzQ3OWFmZjZlYmQ3YTI4ODR+6QFP: 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: ]] 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.812 19:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.747 nvme0n1 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjkyODdiNzIzOWNhNmVlNThkOWJiODBhYmExNjFiMDAyMmMxNmI5Yjk3OTdmOWU0ISNUTw==: 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjkyODdiNzIzOWNhNmVlNThkOWJiODBhYmExNjFiMDAyMmMxNmI5Yjk3OTdmOWU0ISNUTw==: 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: ]] 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.747 19:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.681 nvme0n1 00:22:53.681 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.681 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:53.681 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:53.681 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.681 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE3ZjJkNDcxMzA3OTVjOTc2NWFmNTNiMTI4ZGYyYTHbkR3v: 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE3ZjJkNDcxMzA3OTVjOTc2NWFmNTNiMTI4ZGYyYTHbkR3v: 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: ]] 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:53.940 
19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.940 19:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.873 nvme0n1 00:22:54.873 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.873 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:54.873 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.873 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.873 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:54.873 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.873 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.873 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:54.873 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.873 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.873 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.873 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:54.873 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:22:54.873 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:54.873 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:54.873 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:54.873 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:54.873 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTYxNDQxMGU5NzAyOTE4NTZjMzQzNDM4NDM1NjA0YTRkY2QxOWJjOWRiOTgxMzM02ulJWw==: 00:22:54.873 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: 00:22:54.873 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:54.873 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:54.873 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTYxNDQxMGU5NzAyOTE4NTZjMzQzNDM4NDM1NjA0YTRkY2QxOWJjOWRiOTgxMzM02ulJWw==: 00:22:54.873 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: ]] 00:22:54.873 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: 00:22:54.873 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:22:54.873 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:54.874 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:54.874 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:54.874 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:54.874 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:54.874 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:54.874 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.874 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.874 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.874 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:54.874 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:54.874 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:54.874 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:54.874 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:54.874 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:54.874 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:54.874 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:54.874 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:54.874 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:54.874 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:54.874 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:54.874 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.874 19:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.806 nvme0n1 00:22:55.806 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.806 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:55.806 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.806 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:55.806 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.806 19:05:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.806 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.806 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:55.806 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.806 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UzZTQ1N2U5ZjYzYWU3YTU3NDhkMThkN2Q1NDVhNWQ2NzkwNDJhNzZiNDZhNjBmMWZmYmZmNGUzOTdiYTc2MM1osWc=: 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UzZTQ1N2U5ZjYzYWU3YTU3NDhkMThkN2Q1NDVhNWQ2NzkwNDJhNzZiNDZhNjBmMWZmYmZmNGUzOTdiYTc2MM1osWc=: 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:56.063 19:05:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.063 19:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.995 nvme0n1 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDkwODUwNDM3OGRlMzI4NzQ3OWFmZjZlYmQ3YTI4ODR+6QFP: 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDkwODUwNDM3OGRlMzI4NzQ3OWFmZjZlYmQ3YTI4ODR+6QFP: 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: ]] 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:56.995 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:56.996 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:56.996 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:56.996 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:56.996 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:56.996 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:56.996 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:56.996 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:56.996 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.996 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:57.254 nvme0n1 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjkyODdiNzIzOWNhNmVlNThkOWJiODBhYmExNjFiMDAyMmMxNmI5Yjk3OTdmOWU0ISNUTw==: 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjkyODdiNzIzOWNhNmVlNThkOWJiODBhYmExNjFiMDAyMmMxNmI5Yjk3OTdmOWU0ISNUTw==: 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: ]] 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:57.254 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:57.255 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:57.255 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:57.255 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:57.255 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:57.255 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:57.255 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.255 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.513 nvme0n1 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:22:57.513 
19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE3ZjJkNDcxMzA3OTVjOTc2NWFmNTNiMTI4ZGYyYTHbkR3v: 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE3ZjJkNDcxMzA3OTVjOTc2NWFmNTNiMTI4ZGYyYTHbkR3v: 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: ]] 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:57.513 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:57.514 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:57.514 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:57.514 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:57.514 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.514 19:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.514 nvme0n1 00:22:57.514 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.514 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:57.514 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.514 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:57.514 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTYxNDQxMGU5NzAyOTE4NTZjMzQzNDM4NDM1NjA0YTRkY2QxOWJjOWRiOTgxMzM02ulJWw==: 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTYxNDQxMGU5NzAyOTE4NTZjMzQzNDM4NDM1NjA0YTRkY2QxOWJjOWRiOTgxMzM02ulJWw==: 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: ]] 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:57.772 
19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.772 nvme0n1 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.772 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UzZTQ1N2U5ZjYzYWU3YTU3NDhkMThkN2Q1NDVhNWQ2NzkwNDJhNzZiNDZhNjBmMWZmYmZmNGUzOTdiYTc2MM1osWc=: 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UzZTQ1N2U5ZjYzYWU3YTU3NDhkMThkN2Q1NDVhNWQ2NzkwNDJhNzZiNDZhNjBmMWZmYmZmNGUzOTdiYTc2MM1osWc=: 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.031 nvme0n1 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDkwODUwNDM3OGRlMzI4NzQ3OWFmZjZlYmQ3YTI4ODR+6QFP: 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDkwODUwNDM3OGRlMzI4NzQ3OWFmZjZlYmQ3YTI4ODR+6QFP: 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: ]] 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.031 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.289 nvme0n1 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.289 
19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjkyODdiNzIzOWNhNmVlNThkOWJiODBhYmExNjFiMDAyMmMxNmI5Yjk3OTdmOWU0ISNUTw==: 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjkyODdiNzIzOWNhNmVlNThkOWJiODBhYmExNjFiMDAyMmMxNmI5Yjk3OTdmOWU0ISNUTw==: 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: ]] 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:58.289 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:58.547 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:58.547 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:58.547 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:58.547 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:58.547 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.547 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.547 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.547 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:58.547 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:58.547 19:05:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:58.547 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:58.547 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:58.547 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:58.547 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:58.547 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:58.547 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:58.547 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:58.547 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:58.547 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:58.547 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.547 19:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.547 nvme0n1 00:22:58.547 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.547 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:58.547 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.547 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.547 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:58.547 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.547 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.547 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:58.547 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.547 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE3ZjJkNDcxMzA3OTVjOTc2NWFmNTNiMTI4ZGYyYTHbkR3v: 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: 00:22:58.805 19:05:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE3ZjJkNDcxMzA3OTVjOTc2NWFmNTNiMTI4ZGYyYTHbkR3v: 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: ]] 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.805 nvme0n1 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:58.805 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.806 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTYxNDQxMGU5NzAyOTE4NTZjMzQzNDM4NDM1NjA0YTRkY2QxOWJjOWRiOTgxMzM02ulJWw==: 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTYxNDQxMGU5NzAyOTE4NTZjMzQzNDM4NDM1NjA0YTRkY2QxOWJjOWRiOTgxMzM02ulJWw==: 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: ]] 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.063 19:05:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.063 nvme0n1 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.063 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.321 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.321 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:59.321 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.321 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.321 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.321 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:59.321 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:22:59.321 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:59.321 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:59.321 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:59.321 
19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:59.321 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UzZTQ1N2U5ZjYzYWU3YTU3NDhkMThkN2Q1NDVhNWQ2NzkwNDJhNzZiNDZhNjBmMWZmYmZmNGUzOTdiYTc2MM1osWc=: 00:22:59.321 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:59.322 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:59.322 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:59.322 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UzZTQ1N2U5ZjYzYWU3YTU3NDhkMThkN2Q1NDVhNWQ2NzkwNDJhNzZiNDZhNjBmMWZmYmZmNGUzOTdiYTc2MM1osWc=: 00:22:59.322 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:59.322 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:22:59.322 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:59.322 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:59.322 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:59.322 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:59.322 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:59.322 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:59.322 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.322 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.322 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.322 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:59.322 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:59.322 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:59.322 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:59.322 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:59.322 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:59.322 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:59.322 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:59.322 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:59.322 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:59.322 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:59.322 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:59.322 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.322 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
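The trace repeats one connect_authenticate cycle per (digest, dhgroup, keyid) combination: host/auth.sh installs the key on the kernel nvmet target, restricts the SPDK initiator to the same DHCHAP digest and DH group via bdev_nvme_set_options, attaches a controller with the matching --dhchap-key/--dhchap-ctrlr-key pair, confirms bdev_nvme_get_controllers reports nvme0, and detaches before the next combination. A minimal sketch of one such iteration, reconstructed from the trace rather than copied from the suite — it assumes the nvmet_auth_set_key and rpc_cmd helpers defined elsewhere in host/auth.sh and nvmf/common.sh, that key${keyid}/ckey${keyid} were registered with the initiator earlier in the script, and that ckeys[] holds the controller secrets shown above:

# One connect_authenticate iteration (sketch, not the verbatim suite code).
digest=sha512 dhgroup=ffdhe3072 keyid=1
# Controller secret for keyid=1, value taken from the trace above.
ckeys[1]=DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==:

# Target side: install the DH-HMAC-CHAP key for this combination in nvmet
# (the helper reads keys[keyid]/ckeys[keyid] internally — assumed here).
nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

# Host side: restrict negotiation to the same digest and DH group.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach with the matching key pair; the controller key is passed only when
# ckeys[keyid] is non-empty, i.e. when bidirectional authentication is tested.
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" "${ckey[@]}"

# Authentication succeeded only if the controller is actually visible.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# Detach so the next (digest, dhgroup, keyid) combination starts clean.
rpc_cmd bdev_nvme_detach_controller nvme0

The interleaved get_main_ns_ip blocks pick the connect address from ip_candidates — NVMF_INITIATOR_IP for tcp (10.0.0.1 here) versus NVMF_FIRST_TARGET_IP for rdma. The DHHC-1:NN:...: strings are the standard NVMe DH-HMAC-CHAP secret representation, where NN names the hash used to transform the secret (00 = unhashed, 01/02/03 = SHA-256/384/512) and the base64 payload carries the secret material (with a trailing CRC-32 check value in the Linux/nvme-cli encoding).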
00:22:59.322 nvme0n1 00:22:59.322 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.322 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:59.322 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:59.322 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.322 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.322 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDkwODUwNDM3OGRlMzI4NzQ3OWFmZjZlYmQ3YTI4ODR+6QFP: 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDkwODUwNDM3OGRlMzI4NzQ3OWFmZjZlYmQ3YTI4ODR+6QFP: 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: ]] 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:59.580 19:05:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:59.580 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:59.581 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.581 19:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.839 nvme0n1 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:59.839 19:05:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjkyODdiNzIzOWNhNmVlNThkOWJiODBhYmExNjFiMDAyMmMxNmI5Yjk3OTdmOWU0ISNUTw==: 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjkyODdiNzIzOWNhNmVlNThkOWJiODBhYmExNjFiMDAyMmMxNmI5Yjk3OTdmOWU0ISNUTw==: 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: ]] 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:59.839 19:05:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.839 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.097 nvme0n1 00:23:00.097 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.097 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:00.097 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.097 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.097 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:00.097 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.097 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.097 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.097 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.097 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.097 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.097 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:00.097 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:00.097 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:00.097 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:00.097 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:00.097 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:00.097 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE3ZjJkNDcxMzA3OTVjOTc2NWFmNTNiMTI4ZGYyYTHbkR3v: 00:23:00.097 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: 00:23:00.097 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:00.097 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:00.097 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE3ZjJkNDcxMzA3OTVjOTc2NWFmNTNiMTI4ZGYyYTHbkR3v: 00:23:00.097 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: ]] 00:23:00.097 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: 00:23:00.097 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:00.097 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:00.097 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:00.097 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:00.097 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:00.097 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:00.097 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:00.097 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.097 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.097 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.098 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:00.098 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:00.098 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:00.098 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:00.098 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:00.098 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:00.098 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:00.098 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:00.098 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:00.098 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:00.098 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:00.098 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:00.355 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.355 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.613 nvme0n1 00:23:00.613 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.613 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:00.613 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.613 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.613 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:00.613 19:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTYxNDQxMGU5NzAyOTE4NTZjMzQzNDM4NDM1NjA0YTRkY2QxOWJjOWRiOTgxMzM02ulJWw==: 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTYxNDQxMGU5NzAyOTE4NTZjMzQzNDM4NDM1NjA0YTRkY2QxOWJjOWRiOTgxMzM02ulJWw==: 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: ]] 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:00.613 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.614 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.872 nvme0n1 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UzZTQ1N2U5ZjYzYWU3YTU3NDhkMThkN2Q1NDVhNWQ2NzkwNDJhNzZiNDZhNjBmMWZmYmZmNGUzOTdiYTc2MM1osWc=: 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Y2UzZTQ1N2U5ZjYzYWU3YTU3NDhkMThkN2Q1NDVhNWQ2NzkwNDJhNzZiNDZhNjBmMWZmYmZmNGUzOTdiYTc2MM1osWc=: 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.872 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.129 nvme0n1 00:23:01.129 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.129 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:01.129 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.129 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:01.129 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.129 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDkwODUwNDM3OGRlMzI4NzQ3OWFmZjZlYmQ3YTI4ODR+6QFP: 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDkwODUwNDM3OGRlMzI4NzQ3OWFmZjZlYmQ3YTI4ODR+6QFP: 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: ]] 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.387 19:05:38 
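
The nvmet_auth_set_key calls that dominate this stretch (host/auth.sh@42-51) load the digest, DH group, and key material for the test host into the kernel nvmet target before each connect attempt. The configfs writes themselves are not visible in the xtrace; the sketch below reconstructs them assuming the standard nvmet per-host auth attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key), which is an assumption, not something this log shows.

    # Sketch of nvmet_auth_set_key; the configfs paths are assumed, not logged.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)"  > "$host/dhchap_hash"     # e.g. 'hmac(sha512)'
        echo "$dhgroup"       > "$host/dhchap_dhgroup"  # e.g. ffdhe6144
        echo "${keys[keyid]}" > "$host/dhchap_key"      # DHHC-1:xx:...: secret
        # a controller key is only written when bidirectional auth is under test
        [[ -z ${ckeys[keyid]} ]] || echo "${ckeys[keyid]}" > "$host/dhchap_ctrl_key"
    }
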
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.387 19:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.952 nvme0n1 00:23:01.952 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.952 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:01.952 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.952 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.952 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:01.952 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.952 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.952 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:01.952 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.952 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.952 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.952 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:01.952 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:01.952 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:01.952 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:01.952 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:01.952 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:01.952 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjkyODdiNzIzOWNhNmVlNThkOWJiODBhYmExNjFiMDAyMmMxNmI5Yjk3OTdmOWU0ISNUTw==: 00:23:01.952 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: 00:23:01.952 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:01.952 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:01.952 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjkyODdiNzIzOWNhNmVlNThkOWJiODBhYmExNjFiMDAyMmMxNmI5Yjk3OTdmOWU0ISNUTw==: 00:23:01.952 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: ]] 00:23:01.952 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: 00:23:01.952 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:23:01.952 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:01.952 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:01.952 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:01.952 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:01.953 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:01.953 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:01.953 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.953 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.953 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.953 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:01.953 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:01.953 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:01.953 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:01.953 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:01.953 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:01.953 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:01.953 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:01.953 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:01.953 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:01.953 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:01.953 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:01.953 19:05:39 
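
Each connect_authenticate pass (host/auth.sh@55-61) boils down to two RPCs: first pin the initiator to the digest/dhgroup combination under test, then attach with the matching key pair. Reproduced outside the harness with scripts/rpc.py, using the flags exactly as they appear in the xtrace (key1/ckey1 name keyring entries registered earlier in the test, not shown in this excerpt):

    # restrict DH-HMAC-CHAP negotiation to the combination under test
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    # attach with the host key and the controller (bidirectional) key
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
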
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.953 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.518 nvme0n1 00:23:02.518 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.518 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:02.518 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.518 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.518 19:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:02.518 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.518 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.518 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:02.518 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.518 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.518 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.518 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:02.518 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:02.518 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:02.518 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:02.518 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:02.518 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:02.518 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE3ZjJkNDcxMzA3OTVjOTc2NWFmNTNiMTI4ZGYyYTHbkR3v: 00:23:02.518 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: 00:23:02.518 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:02.518 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:02.518 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE3ZjJkNDcxMzA3OTVjOTc2NWFmNTNiMTI4ZGYyYTHbkR3v: 00:23:02.518 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: ]] 00:23:02.518 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: 00:23:02.518 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:02.518 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:02.518 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:02.518 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:02.518 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:02.518 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:02.519 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:02.519 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.519 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.519 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.519 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:02.519 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:02.519 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:02.519 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:02.519 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:02.519 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:02.519 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:02.519 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:02.519 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:02.519 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:02.519 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:02.519 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:02.519 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.519 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.084 nvme0n1 00:23:03.084 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.084 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.084 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:03.084 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.084 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.084 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.084 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.084 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.084 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.084 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.084 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.084 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:03.084 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:23:03.084 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:03.084 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:03.084 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:03.084 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:03.084 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTYxNDQxMGU5NzAyOTE4NTZjMzQzNDM4NDM1NjA0YTRkY2QxOWJjOWRiOTgxMzM02ulJWw==: 00:23:03.084 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: 00:23:03.084 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:03.084 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:03.084 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTYxNDQxMGU5NzAyOTE4NTZjMzQzNDM4NDM1NjA0YTRkY2QxOWJjOWRiOTgxMzM02ulJWw==: 00:23:03.084 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: ]] 00:23:03.084 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: 00:23:03.084 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:03.084 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:03.084 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:03.084 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:03.084 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:03.084 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:03.084 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:03.084 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.084 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.342 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.342 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:03.342 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:03.342 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:03.342 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:03.342 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.342 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.342 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:03.342 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:03.342 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:03.342 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:03.342 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:03.342 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:03.342 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.342 19:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.907 nvme0n1 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UzZTQ1N2U5ZjYzYWU3YTU3NDhkMThkN2Q1NDVhNWQ2NzkwNDJhNzZiNDZhNjBmMWZmYmZmNGUzOTdiYTc2MM1osWc=: 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UzZTQ1N2U5ZjYzYWU3YTU3NDhkMThkN2Q1NDVhNWQ2NzkwNDJhNzZiNDZhNjBmMWZmYmZmNGUzOTdiYTc2MM1osWc=: 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:03.907 19:05:41 
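
The get_main_ns_ip helper that keeps resolving to 10.0.0.1 above picks the target address per transport through bash indirect expansion. This reconstruction follows the xtrace at nvmf/common.sh@741-755; the exact return handling between the logged checks is inferred:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP      # tcp run, hence 10.0.0.1 here
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1                 # indirect: $NVMF_INITIATOR_IP
        echo "${!ip}"
    }
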
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.907 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.473 nvme0n1 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDkwODUwNDM3OGRlMzI4NzQ3OWFmZjZlYmQ3YTI4ODR+6QFP: 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDkwODUwNDM3OGRlMzI4NzQ3OWFmZjZlYmQ3YTI4ODR+6QFP: 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: ]] 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzYwZmQ1OGE4NDJmZDAyM2ZhOWEyOGMxYWJmOTUyZWU0ZTQ2YTQyNmNiNDgxZTYxNjViMTdkY2E4NzM2MDlhYpoXyZc=: 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.473 19:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.405 nvme0n1 00:23:05.405 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.405 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.405 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.405 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.405 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.405 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.405 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjkyODdiNzIzOWNhNmVlNThkOWJiODBhYmExNjFiMDAyMmMxNmI5Yjk3OTdmOWU0ISNUTw==: 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjkyODdiNzIzOWNhNmVlNThkOWJiODBhYmExNjFiMDAyMmMxNmI5Yjk3OTdmOWU0ISNUTw==: 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: ]] 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.406 19:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.794 nvme0n1 00:23:06.794 19:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.794 19:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.794 19:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.794 19:05:43 
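
After every successful attach the harness verifies the controller really materialized (the bare nvme0n1 tokens scattered through the log are the resulting namespace showing up) and then detaches it so the next digest/dhgroup/key combination starts from a clean slate. The same check without the rpc_cmd/xtrace wrappers, assuming rpc_cmd is a thin shim over scripts/rpc.py:

    # assert exactly one controller named nvme0 exists, then tear it down
    name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]] || exit 1
    scripts/rpc.py bdev_nvme_detach_controller nvme0
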
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.794 19:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.794 19:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE3ZjJkNDcxMzA3OTVjOTc2NWFmNTNiMTI4ZGYyYTHbkR3v: 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE3ZjJkNDcxMzA3OTVjOTc2NWFmNTNiMTI4ZGYyYTHbkR3v: 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: ]] 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjI3ODRlYjQ5NjAxNDRjMjNmMzRhYzQ1Njg4ZThmNDFcPd6s: 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.794 19:05:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.794 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.359 nvme0n1 00:23:07.359 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.359 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.359 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:07.359 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.359 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.617 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.617 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.617 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.617 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.617 19:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OTYxNDQxMGU5NzAyOTE4NTZjMzQzNDM4NDM1NjA0YTRkY2QxOWJjOWRiOTgxMzM02ulJWw==: 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTYxNDQxMGU5NzAyOTE4NTZjMzQzNDM4NDM1NjA0YTRkY2QxOWJjOWRiOTgxMzM02ulJWw==: 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: ]] 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE4YWViY2M0YjVjYmMwMzY1OTczNTE4NDVhZDA2NzNTUNf2: 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:07.617 19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.617 
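
All of the secrets cycling through this log share the DHHC-1:NN:<base64>: shape defined for NVMe in-band authentication: NN names the hash associated with the secret (00 marking an untransformed one) and, going by the spec rather than anything in this log, the base64 payload carries the secret with a CRC-32 appended. Decoding a key taken from just above is a quick sanity check of that layout:

    # decode the payload of a logged key (secret+CRC layout per spec, assumed)
    key='DHHC-1:02:OTYxNDQxMGU5NzAyOTE4NTZjMzQzNDM4NDM1NjA0YTRkY2QxOWJjOWRiOTgxMzM02ulJWw==:'
    b64=${key#DHHC-1:??:}             # strip the 'DHHC-1:02:' prefix
    b64=${b64%:}                      # strip the trailing ':'
    echo "$b64" | base64 -d | wc -c   # 52 = 48-byte secret + 4-byte CRC-32
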
19:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.549 nvme0n1 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UzZTQ1N2U5ZjYzYWU3YTU3NDhkMThkN2Q1NDVhNWQ2NzkwNDJhNzZiNDZhNjBmMWZmYmZmNGUzOTdiYTc2MM1osWc=: 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UzZTQ1N2U5ZjYzYWU3YTU3NDhkMThkN2Q1NDVhNWQ2NzkwNDJhNzZiNDZhNjBmMWZmYmZmNGUzOTdiYTc2MM1osWc=: 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.549 19:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.482 nvme0n1 00:23:09.482 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.482 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.482 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.482 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.482 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.482 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjkyODdiNzIzOWNhNmVlNThkOWJiODBhYmExNjFiMDAyMmMxNmI5Yjk3OTdmOWU0ISNUTw==: 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjkyODdiNzIzOWNhNmVlNThkOWJiODBhYmExNjFiMDAyMmMxNmI5Yjk3OTdmOWU0ISNUTw==: 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: ]] 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWViYmI2MTk0MjhkMWVmYjU0MGI5NTYxNjFlODdhYTg3OTE2OGQyZjM3ZTcwOWU0l/WcTw==: 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.740 request: 00:23:09.740 { 00:23:09.740 "name": "nvme0", 00:23:09.740 "trtype": "tcp", 00:23:09.740 "traddr": "10.0.0.1", 00:23:09.740 "adrfam": "ipv4", 00:23:09.740 "trsvcid": "4420", 00:23:09.740 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:09.740 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:09.740 "prchk_reftag": false, 00:23:09.740 "prchk_guard": false, 00:23:09.740 "hdgst": false, 00:23:09.740 "ddgst": false, 00:23:09.740 "method": "bdev_nvme_attach_controller", 00:23:09.740 "req_id": 1 00:23:09.740 } 00:23:09.740 Got JSON-RPC error response 00:23:09.740 response: 00:23:09.740 { 00:23:09.740 "code": -5, 00:23:09.740 "message": "Input/output error" 00:23:09.740 } 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.740 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.741 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.741 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.741 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.741 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:09.741 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.741 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:09.741 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:09.741 19:05:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:09.741 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:09.741 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:23:09.741 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:09.741 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:09.741 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:09.741 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:09.741 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:09.741 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:09.741 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.741 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.741 request: 00:23:09.741 { 00:23:09.741 "name": "nvme0", 00:23:09.741 "trtype": "tcp", 00:23:09.741 "traddr": "10.0.0.1", 00:23:09.741 "adrfam": "ipv4", 00:23:09.741 "trsvcid": "4420", 00:23:09.741 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:09.741 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:09.741 "prchk_reftag": false, 00:23:09.741 "prchk_guard": false, 00:23:09.741 "hdgst": false, 00:23:09.741 "ddgst": false, 00:23:09.741 "dhchap_key": "key2", 00:23:09.741 "method": "bdev_nvme_attach_controller", 00:23:09.741 "req_id": 1 00:23:09.741 } 00:23:09.741 Got JSON-RPC error response 00:23:09.741 response: 00:23:09.741 { 00:23:09.741 "code": -5, 00:23:09.741 "message": "Input/output error" 00:23:09.741 } 00:23:09.741 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:09.741 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:23:09.741 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:09.741 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:09.741 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:09.741 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.741 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.998 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:23:09.998 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.998 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.998 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:23:09.998 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:23:09.998 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.998 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.998 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.998 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.998 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.998 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:09.998 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.998 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:09.998 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:09.998 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:09.998 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:09.998 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:23:09.998 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:09.998 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:09.998 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:09.998 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:09.998 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:09.998 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:09.998 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.998 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.998 request: 00:23:09.998 { 00:23:09.998 "name": "nvme0", 00:23:09.998 "trtype": "tcp", 00:23:09.998 "traddr": "10.0.0.1", 00:23:09.998 "adrfam": "ipv4", 00:23:09.998 "trsvcid": "4420", 00:23:09.998 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:09.998 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:09.998 "prchk_reftag": false, 00:23:09.998 "prchk_guard": false, 00:23:09.998 "hdgst": false, 00:23:09.998 "ddgst": false, 00:23:09.998 "dhchap_key": "key1", 00:23:09.998 "dhchap_ctrlr_key": "ckey2", 00:23:09.998 "method": "bdev_nvme_attach_controller", 00:23:09.999 "req_id": 1 00:23:09.999 } 00:23:09.999 Got JSON-RPC error response 00:23:09.999 response: 00:23:09.999 { 00:23:09.999 "code": -5, 00:23:09.999 "message": "Input/output error" 00:23:09.999 } 00:23:09.999 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:09.999 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:23:09.999 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:09.999 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:09.999 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:09.999 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:23:09.999 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:23:09.999 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:23:09.999 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:09.999 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:23:09.999 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:09.999 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:23:09.999 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:09.999 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:09.999 rmmod nvme_tcp 00:23:09.999 rmmod nvme_fabrics 00:23:09.999 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:09.999 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:23:09.999 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:23:09.999 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3232401 ']' 00:23:09.999 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3232401 00:23:09.999 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 3232401 ']' 00:23:09.999 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 3232401 00:23:09.999 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:23:09.999 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:09.999 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3232401 00:23:09.999 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:09.999 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:09.999 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3232401' 00:23:09.999 killing process with pid 3232401 00:23:09.999 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 3232401 00:23:09.999 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 3232401 00:23:10.258 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:10.258 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:10.258 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:10.258 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:10.258 19:05:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:10.258 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.258 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:10.258 19:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.791 19:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:12.791 19:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:12.791 19:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:12.791 19:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:23:12.791 19:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:23:12.791 19:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:23:12.791 19:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:12.791 19:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:12.791 19:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:12.791 19:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:12.791 19:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:12.791 19:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:12.791 19:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:13.726 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:13.726 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:13.726 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:13.726 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:13.726 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:13.726 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:13.726 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:13.726 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:13.726 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:13.726 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:13.726 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:13.726 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:13.726 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:13.726 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:13.726 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:13.726 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:14.662 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:23:14.662 19:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.4D6 /tmp/spdk.key-null.K9T /tmp/spdk.key-sha256.fp9 /tmp/spdk.key-sha384.Y6B /tmp/spdk.key-sha512.TmE /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:23:14.662 19:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:16.036 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:16.036 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:16.036 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:16.036 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:16.036 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:16.036 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:16.036 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:16.036 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:16.036 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:16.036 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:23:16.036 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:16.036 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:16.036 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:16.036 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:16.036 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:16.036 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:16.036 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:16.036 00:23:16.036 real 0m51.449s 00:23:16.036 user 0m48.748s 00:23:16.036 sys 0m5.873s 00:23:16.036 19:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:16.036 19:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.036 ************************************ 00:23:16.036 END TEST nvmf_auth_host 00:23:16.036 ************************************ 00:23:16.036 19:05:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:23:16.036 19:05:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:16.036 19:05:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:16.036 19:05:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:16.036 19:05:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.036 ************************************ 00:23:16.036 START TEST nvmf_digest 00:23:16.036 ************************************ 00:23:16.036 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:16.036 * Looking for test storage... 
00:23:16.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:16.294 
19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:23:16.294 19:05:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:18.195 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:18.195 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:18.195 
19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:18.195 Found net devices under 0000:09:00.0: cvl_0_0 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:18.195 Found net devices under 0000:09:00.1: cvl_0_1 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:18.195 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:18.196 19:05:55 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:18.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:18.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:23:18.196 00:23:18.196 --- 10.0.0.2 ping statistics --- 00:23:18.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.196 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:18.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:18.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:23:18.196 00:23:18.196 --- 10.0.0.1 ping statistics --- 00:23:18.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.196 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:18.196 ************************************ 00:23:18.196 START TEST nvmf_digest_clean 00:23:18.196 ************************************ 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3242019 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3242019 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3242019 ']' 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:18.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:18.196 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:18.196 [2024-07-24 19:05:55.784727] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:23:18.196 [2024-07-24 19:05:55.784814] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.455 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.455 [2024-07-24 19:05:55.849076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.455 [2024-07-24 19:05:55.953731] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:18.455 [2024-07-24 19:05:55.953794] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:18.455 [2024-07-24 19:05:55.953823] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:18.455 [2024-07-24 19:05:55.953834] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:18.455 [2024-07-24 19:05:55.953843] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
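The startup sequence above is the standard nvmfappstart pattern for these host suites: nvmf_tgt is launched inside the target network namespace with --wait-for-rpc, so it pauses after printing these notices until pre-init configuration is done, while waitforlisten blocks on the RPC socket. A minimal sketch under the paths shown in the trace (the rpc_get_methods liveness probe is an assumption about the helper's internals, not a line from this log):

  # Start the target paused inside the namespace set up earlier.
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # Poll the JSON-RPC socket until the app answers.
  until scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
    sleep 0.5
  done

With the socket live, the suite then creates the null0 bdev and the 10.0.0.2:4420 TCP listener visible in the lines that follow.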
00:23:18.455 [2024-07-24 19:05:55.953869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.455 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:18.455 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:23:18.455 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:18.455 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:18.455 19:05:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:18.455 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.455 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:23:18.455 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:23:18.455 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:23:18.455 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.455 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:18.714 null0 00:23:18.714 [2024-07-24 19:05:56.121913] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.714 [2024-07-24 19:05:56.146137] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:18.714 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.714 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:23:18.714 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:18.714 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:18.714 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:18.714 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:18.714 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:18.714 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:18.714 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3242043 00:23:18.714 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:18.714 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3242043 /var/tmp/bperf.sock 00:23:18.714 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3242043 ']' 00:23:18.714 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:18.714 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:23:18.714 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:18.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:18.714 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:18.714 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:18.714 [2024-07-24 19:05:56.190076] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:23:18.714 [2024-07-24 19:05:56.190171] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3242043 ] 00:23:18.714 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.714 [2024-07-24 19:05:56.252334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.972 [2024-07-24 19:05:56.370312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:18.972 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:18.972 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:23:18.972 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:18.972 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:18.972 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:19.230 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:19.230 19:05:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:19.796 nvme0n1 00:23:19.796 19:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:19.796 19:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:19.796 Running I/O for 2 seconds... 
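Each run_bperf pass follows the shape traced above: launch bdevperf idle, finish framework init over its private socket, attach the target subsystem with the digest flag under test, then kick the workload. Condensed from the trace, with sockets, NQN, and flags exactly as logged (-z together with --wait-for-rpc is what holds bdevperf until perform_tests arrives):

  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The numbers that follow are for this 4096-byte, queue-depth-128 randread configuration with data digest enabled.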
00:23:22.323 00:23:22.323 Latency(us) 00:23:22.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:22.323 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:22.323 nvme0n1 : 2.05 12669.84 49.49 0.00 0.00 9891.15 4854.52 50486.99 00:23:22.323 =================================================================================================================== 00:23:22.323 Total : 12669.84 49.49 0.00 0.00 9891.15 4854.52 50486.99 00:23:22.323 0 00:23:22.323 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:22.323 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:22.323 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:22.323 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:22.323 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:22.323 | select(.opcode=="crc32c") 00:23:22.323 | "\(.module_name) \(.executed)"' 00:23:22.323 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:22.323 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:22.323 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:22.323 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:22.323 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3242043 00:23:22.323 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3242043 ']' 00:23:22.323 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3242043 00:23:22.323 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:23:22.323 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:22.323 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3242043 00:23:22.323 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:22.323 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:22.323 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3242043' 00:23:22.323 killing process with pid 3242043 00:23:22.323 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3242043 00:23:22.323 Received shutdown signal, test time was about 2.000000 seconds 00:23:22.323 00:23:22.323 Latency(us) 00:23:22.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:22.323 =================================================================================================================== 00:23:22.323 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:22.323 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 3242043 00:23:22.581 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:23:22.581 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:22.581 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:22.581 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:22.581 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:23:22.581 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:23:22.581 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:22.581 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3242455 00:23:22.581 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:22.582 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3242455 /var/tmp/bperf.sock 00:23:22.582 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3242455 ']' 00:23:22.582 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:22.582 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:22.582 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:22.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:22.582 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:22.582 19:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:22.582 [2024-07-24 19:05:59.970614] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:23:22.582 [2024-07-24 19:05:59.970695] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3242455 ] 00:23:22.582 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:22.582 Zero copy mechanism will not be used. 
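This is the second workload tuple of the clean suite: the pass above used 4 KiB reads at queue depth 128, and this one switches to 128 KiB reads at queue depth 16, which is why bdevperf notes that the I/O size now exceeds its 64 KiB zero-copy threshold. The parameterization, reconstructed from the run_bperf calls and the local assignments in the trace (argument order: rw, block size, queue depth, DSA scan flag):

  run_bperf randread 4096 128 false    # 4 KiB blocks, deep queue
  run_bperf randread 131072 16 false   # 128 KiB blocks, shallow queue; zero copy is skipped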
00:23:22.582 EAL: No free 2048 kB hugepages reported on node 1 00:23:22.582 [2024-07-24 19:06:00.030912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.582 [2024-07-24 19:06:00.145585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:22.839 19:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:22.839 19:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:23:22.839 19:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:22.839 19:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:22.839 19:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:23.097 19:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:23.097 19:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:23.663 nvme0n1 00:23:23.663 19:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:23.663 19:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:23.663 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:23.663 Zero copy mechanism will not be used. 00:23:23.663 Running I/O for 2 seconds... 
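The controller attached above carries --ddgst, so this pass exercises the NVMe/TCP data-digest clean path: the host computes a crc32c for each data PDU and verifies it on receive, and perform_tests then drives the queued I/O for the two-second window. Condensed to the two host-side commands from the xtrace, assuming the target configured earlier is still listening on 10.0.0.2:4420:

    rpc="$SPDK_DIR"/scripts/rpc.py
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests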
00:23:25.562 00:23:25.562 Latency(us) 00:23:25.562 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.562 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:23:25.562 nvme0n1 : 2.00 3423.08 427.89 0.00 0.00 4669.92 1353.20 12427.57 00:23:25.562 =================================================================================================================== 00:23:25.562 Total : 3423.08 427.89 0.00 0.00 4669.92 1353.20 12427.57 00:23:25.562 0 00:23:25.562 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:25.562 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:25.562 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:25.562 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:25.562 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:25.562 | select(.opcode=="crc32c") 00:23:25.562 | "\(.module_name) \(.executed)"' 00:23:25.821 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:25.821 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:25.821 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:25.821 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:25.821 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3242455 00:23:25.821 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3242455 ']' 00:23:25.821 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3242455 00:23:25.821 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:23:25.821 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:25.821 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3242455 00:23:25.821 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:25.821 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:25.821 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3242455' 00:23:25.821 killing process with pid 3242455 00:23:25.821 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3242455 00:23:25.821 Received shutdown signal, test time was about 2.000000 seconds 00:23:25.821 00:23:25.821 Latency(us) 00:23:25.821 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.821 =================================================================================================================== 00:23:25.821 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:25.821 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 3242455 00:23:26.078 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:23:26.078 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:26.078 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:26.078 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:23:26.078 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:26.078 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:26.078 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:26.078 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3243093 00:23:26.078 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3243093 /var/tmp/bperf.sock 00:23:26.078 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:26.078 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3243093 ']' 00:23:26.078 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:26.078 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:26.079 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:26.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:26.079 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:26.079 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:26.336 [2024-07-24 19:06:03.700893] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
00:23:26.336 [2024-07-24 19:06:03.700973] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3243093 ] 00:23:26.336 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.336 [2024-07-24 19:06:03.758496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.336 [2024-07-24 19:06:03.866500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.336 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:26.336 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:23:26.336 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:26.336 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:26.336 19:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:26.988 19:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:26.988 19:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:27.247 nvme0n1 00:23:27.247 19:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:27.247 19:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:27.247 Running I/O for 2 seconds... 
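After each timed run, digest.sh confirms the crc32c work really executed, and in the expected module (software here, since every run passes scan_dsa=false). A sketch of that check as it appears in the xtrace above, reusing the script's own variable names:

    rpc="$SPDK_DIR"/scripts/rpc.py
    read -r acc_module acc_executed < <(
        "$rpc" -s /var/tmp/bperf.sock accel_get_stats |
            jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    exp_module=software
    (( acc_executed > 0 ))               # some digests were actually computed...
    [[ $acc_module == "$exp_module" ]]   # ...and by the software accel module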
00:23:29.775 00:23:29.775 Latency(us) 00:23:29.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.775 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:29.775 nvme0n1 : 2.00 21217.45 82.88 0.00 0.00 6022.09 2645.71 12913.02 00:23:29.775 =================================================================================================================== 00:23:29.775 Total : 21217.45 82.88 0.00 0.00 6022.09 2645.71 12913.02 00:23:29.775 0 00:23:29.775 19:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:29.775 19:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:29.775 19:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:29.775 19:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:29.775 19:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:29.775 | select(.opcode=="crc32c") 00:23:29.775 | "\(.module_name) \(.executed)"' 00:23:29.775 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:29.775 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:29.775 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:29.775 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:29.775 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3243093 00:23:29.775 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3243093 ']' 00:23:29.775 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3243093 00:23:29.775 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:23:29.775 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:29.775 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3243093 00:23:29.775 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:29.775 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:29.775 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3243093' 00:23:29.775 killing process with pid 3243093 00:23:29.775 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3243093 00:23:29.775 Received shutdown signal, test time was about 2.000000 seconds 00:23:29.775 00:23:29.775 Latency(us) 00:23:29.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.775 =================================================================================================================== 00:23:29.775 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:29.775 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 3243093 00:23:30.033 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:23:30.033 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:30.033 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:30.033 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:23:30.033 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:23:30.033 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:23:30.033 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:30.033 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3243635 00:23:30.033 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3243635 /var/tmp/bperf.sock 00:23:30.033 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:30.033 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3243635 ']' 00:23:30.033 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:30.033 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:30.033 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:30.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:30.033 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:30.033 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:30.033 [2024-07-24 19:06:07.439100] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:23:30.033 [2024-07-24 19:06:07.439205] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3243635 ] 00:23:30.033 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:30.033 Zero copy mechanism will not be used. 
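The clean-path suite is this one flow repeated over a small parameter matrix; the 131072-byte cases additionally print the zero-copy notice above because they exceed the 65536-byte zero-copy threshold. A sketch of the matrix, assuming the same run_bperf helper (args: rw, bs, qd, scan_dsa); the first row is inferred from the first results table, the rest match digest.sh@129-131 in the xtrace:

    run_bperf randread  4096   128 false
    run_bperf randread  131072  16 false
    run_bperf randwrite 4096   128 false
    run_bperf randwrite 131072  16 false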
00:23:30.033 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.034 [2024-07-24 19:06:07.498321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.034 [2024-07-24 19:06:07.609718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.291 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:30.291 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:23:30.291 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:30.291 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:30.291 19:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:30.549 19:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:30.549 19:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:30.807 nvme0n1 00:23:30.807 19:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:30.807 19:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:31.065 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:31.065 Zero copy mechanism will not be used. 00:23:31.065 Running I/O for 2 seconds... 
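Each results table reports, per job: runtime in seconds, IOPS, throughput in MiB/s, failures and timeouts per second, and average/min/max latency in microseconds. The MiB/s column is just IOPS scaled by the I/O size, which makes a quick sanity check on any row, e.g. the 4096-byte randwrite row earlier:

    # 21217.45 IOPS * 4096 B / 1048576 B/MiB = 82.88 MiB/s, matching the table
    awk 'BEGIN { printf "%.2f MiB/s\n", 21217.45 * 4096 / 1048576 }'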
00:23:32.960 00:23:32.960 Latency(us) 00:23:32.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.960 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:32.960 nvme0n1 : 2.01 2458.82 307.35 0.00 0.00 6491.98 4781.70 14272.28 00:23:32.960 =================================================================================================================== 00:23:32.960 Total : 2458.82 307.35 0.00 0.00 6491.98 4781.70 14272.28 00:23:32.960 0 00:23:32.960 19:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:32.960 19:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:32.960 19:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:32.960 19:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:32.960 19:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:32.960 | select(.opcode=="crc32c") 00:23:32.960 | "\(.module_name) \(.executed)"' 00:23:33.216 19:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:33.216 19:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:33.216 19:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:33.216 19:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:33.216 19:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3243635 00:23:33.216 19:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3243635 ']' 00:23:33.216 19:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3243635 00:23:33.216 19:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:23:33.216 19:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:33.216 19:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3243635 00:23:33.216 19:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:33.216 19:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:33.216 19:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3243635' 00:23:33.216 killing process with pid 3243635 00:23:33.216 19:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3243635 00:23:33.216 Received shutdown signal, test time was about 2.000000 seconds 00:23:33.216 00:23:33.216 Latency(us) 00:23:33.216 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.216 =================================================================================================================== 00:23:33.216 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:33.216 19:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 3243635 00:23:33.473 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3242019 00:23:33.473 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3242019 ']' 00:23:33.473 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3242019 00:23:33.473 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:23:33.473 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:33.473 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3242019 00:23:33.473 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:33.473 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:33.473 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3242019' 00:23:33.473 killing process with pid 3242019 00:23:33.473 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3242019 00:23:33.473 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3242019 00:23:33.729 00:23:33.729 real 0m15.566s 00:23:33.729 user 0m30.011s 00:23:33.729 sys 0m4.345s 00:23:33.729 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:33.729 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:33.729 ************************************ 00:23:33.729 END TEST nvmf_digest_clean 00:23:33.729 ************************************ 00:23:33.729 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:23:33.729 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:33.729 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:33.729 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:33.987 ************************************ 00:23:33.987 START TEST nvmf_digest_error 00:23:33.987 ************************************ 00:23:33.987 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:23:33.987 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:23:33.987 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:33.987 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:33.987 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:33.987 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3244444 00:23:33.987 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:33.987 19:06:11 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3244444 00:23:33.987 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3244444 ']' 00:23:33.987 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.987 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:33.987 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.987 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:33.987 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:33.987 [2024-07-24 19:06:11.399664] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:23:33.987 [2024-07-24 19:06:11.399756] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.987 EAL: No free 2048 kB hugepages reported on node 1 00:23:33.987 [2024-07-24 19:06:11.463888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.987 [2024-07-24 19:06:11.570412] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.987 [2024-07-24 19:06:11.570469] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.987 [2024-07-24 19:06:11.570498] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.987 [2024-07-24 19:06:11.570510] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.987 [2024-07-24 19:06:11.570520] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
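nvmf_digest_error starts its own target with --wait-for-rpc so that, before framework init, crc32c can be routed to the error-injection accel module (the accel_rpc notice just below). A condensed, illustrative sketch of that target-side sequence; the crc32c assignment is as logged, while the null0 bdev arguments and the exact subsystem setup steps are assumptions about what common_target_config does:

    rpc_cmd accel_assign_opc -o crc32c -m error     # must precede framework_start_init
    rpc_cmd framework_start_init
    rpc_cmd bdev_null_create null0 1000 512         # backing namespace (size/bs illustrative)
    rpc_cmd nvmf_create_transport -t tcp
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420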
00:23:33.987 [2024-07-24 19:06:11.570548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.244 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:34.244 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:23:34.244 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:34.244 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:34.244 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:34.244 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.244 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:23:34.244 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.244 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:34.244 [2024-07-24 19:06:11.631111] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:23:34.244 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.244 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:23:34.244 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:23:34.244 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.244 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:34.244 null0 00:23:34.244 [2024-07-24 19:06:11.755072] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.244 [2024-07-24 19:06:11.779313] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.244 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.245 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:23:34.245 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:23:34.245 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:23:34.245 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:23:34.245 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:23:34.245 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3244589 00:23:34.245 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:23:34.245 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3244589 /var/tmp/bperf.sock 00:23:34.245 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3244589 ']' 
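On the host side, the flow that follows in the xtrace below is: configure bdev_nvme to keep NVMe error statistics and retry forever, attach with digests enabled while injection is disabled, then switch injection to corrupt. Since the target's crc32c module now produces digests that no longer match the data it sends, the host's receive-path verification fails, producing the stream of "data digest error" / COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions below. Condensed from the xtrace (rpc_cmd talks to the target's default socket, bperf.sock to the host bdevperf):

    rpc="$SPDK_DIR"/scripts/rpc.py
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc_cmd accel_error_inject_error -o crc32c -t disable    # injection off for a clean attach
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256   # parameters as captured below
    "$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests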
00:23:34.245 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:34.245 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:34.245 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:34.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:34.245 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:34.245 19:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:34.245 [2024-07-24 19:06:11.831062] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:23:34.245 [2024-07-24 19:06:11.831157] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3244589 ] 00:23:34.502 EAL: No free 2048 kB hugepages reported on node 1 00:23:34.502 [2024-07-24 19:06:11.895776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.502 [2024-07-24 19:06:12.008205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.760 19:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:34.760 19:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:23:34.760 19:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:34.760 19:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:35.017 19:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:35.017 19:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.017 19:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:35.017 19:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.017 19:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:35.017 19:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:35.274 nvme0n1 00:23:35.274 19:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:35.274 19:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.274 19:06:12 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:35.275 19:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.275 19:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:35.275 19:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:35.275 Running I/O for 2 seconds... 00:23:35.532 [2024-07-24 19:06:12.886640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0) 00:23:35.533 [2024-07-24 19:06:12.886689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.533 [2024-07-24 19:06:12.886710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.533 [2024-07-24 19:06:12.902368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0) 00:23:35.533 [2024-07-24 19:06:12.902418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.533 [2024-07-24 19:06:12.902438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.533 [2024-07-24 19:06:12.913925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0) 00:23:35.533 [2024-07-24 19:06:12.913961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.533 [2024-07-24 19:06:12.913981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.533 [2024-07-24 19:06:12.928515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0) 00:23:35.533 [2024-07-24 19:06:12.928550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.533 [2024-07-24 19:06:12.928569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.533 [2024-07-24 19:06:12.943834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0) 00:23:35.533 [2024-07-24 19:06:12.943870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.533 [2024-07-24 19:06:12.943889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.533 [2024-07-24 19:06:12.958669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0) 00:23:35.533 [2024-07-24 19:06:12.958703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.533 [2024-07-24 19:06:12.958722] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.533 [2024-07-24 19:06:12.971696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0) 00:23:35.533 [2024-07-24 19:06:12.971730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.533 [2024-07-24 19:06:12.971749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.533 [2024-07-24 19:06:12.983691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0) 00:23:35.533 [2024-07-24 19:06:12.983724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.533 [2024-07-24 19:06:12.983744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.533 [2024-07-24 19:06:12.998638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0) 00:23:35.533 [2024-07-24 19:06:12.998672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.533 [2024-07-24 19:06:12.998690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.533 [2024-07-24 19:06:13.014606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0) 00:23:35.533 [2024-07-24 19:06:13.014641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.533 [2024-07-24 19:06:13.014660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.533 [2024-07-24 19:06:13.026706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0) 00:23:35.533 [2024-07-24 19:06:13.026740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.533 [2024-07-24 19:06:13.026758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.533 [2024-07-24 19:06:13.043508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0) 00:23:35.533 [2024-07-24 19:06:13.043543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.533 [2024-07-24 19:06:13.043568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.533 [2024-07-24 19:06:13.057411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0) 00:23:35.533 [2024-07-24 19:06:13.057460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:35.533 [2024-07-24 19:06:13.057478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.533 [2024-07-24 19:06:13.073293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0) 00:23:35.533 [2024-07-24 19:06:13.073324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.533 [2024-07-24 19:06:13.073341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.533 [2024-07-24 19:06:13.086688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0) 00:23:35.533 [2024-07-24 19:06:13.086721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.533 [2024-07-24 19:06:13.086740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.533 [2024-07-24 19:06:13.101678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0) 00:23:35.533 [2024-07-24 19:06:13.101711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.533 [2024-07-24 19:06:13.101730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.533 [2024-07-24 19:06:13.114789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0) 00:23:35.533 [2024-07-24 19:06:13.114822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.533 [2024-07-24 19:06:13.114841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.533 [2024-07-24 19:06:13.128466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0) 00:23:35.533 [2024-07-24 19:06:13.128499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.533 [2024-07-24 19:06:13.128517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.791 [2024-07-24 19:06:13.141677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0) 00:23:35.791 [2024-07-24 19:06:13.141713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.791 [2024-07-24 19:06:13.141732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.791 [2024-07-24 19:06:13.154472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0) 00:23:35.791 [2024-07-24 19:06:13.154519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 
lba:11752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.791 [2024-07-24 19:06:13.154538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.791 [2024-07-24 19:06:13.169330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0) 00:23:35.791 [2024-07-24 19:06:13.169359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.791 [2024-07-24 19:06:13.169390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.791 [2024-07-24 19:06:13.182859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0) 00:23:35.791 [2024-07-24 19:06:13.182893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:17944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.791 [2024-07-24 19:06:13.182912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.791 [2024-07-24 19:06:13.196068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0) 00:23:35.792 [2024-07-24 19:06:13.196109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.792 [2024-07-24 19:06:13.196130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.792 [2024-07-24 19:06:13.210509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0) 00:23:35.792 [2024-07-24 19:06:13.210543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.792 [2024-07-24 19:06:13.210561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.792 [2024-07-24 19:06:13.224761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0) 00:23:35.792 [2024-07-24 19:06:13.224794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.792 [2024-07-24 19:06:13.224812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.792 [2024-07-24 19:06:13.237244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0) 00:23:35.792 [2024-07-24 19:06:13.237288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.792 [2024-07-24 19:06:13.237305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.792 [2024-07-24 19:06:13.253536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0) 00:23:35.792 [2024-07-24 19:06:13.253570] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.792 [2024-07-24 19:06:13.253589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.792 [2024-07-24 19:06:13.266896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0) 00:23:35.792 [2024-07-24 19:06:13.266930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.792 [2024-07-24 19:06:13.266948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.792 [2024-07-24 19:06:13.279941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0) 00:23:35.792 [2024-07-24 19:06:13.279973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.792 [2024-07-24 19:06:13.279998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.792 [2024-07-24 19:06:13.292822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0) 00:23:35.792 [2024-07-24 19:06:13.292856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.792 [2024-07-24 19:06:13.292874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.792 [2024-07-24 19:06:13.307197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0) 00:23:35.792 [2024-07-24 19:06:13.307226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.792 [2024-07-24 19:06:13.307243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.792 [2024-07-24 19:06:13.323351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0) 00:23:35.792 [2024-07-24 19:06:13.323381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.792 [2024-07-24 19:06:13.323398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.792 [2024-07-24 19:06:13.335563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0) 00:23:35.792 [2024-07-24 19:06:13.335597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.792 [2024-07-24 19:06:13.335615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.792 [2024-07-24 19:06:13.351654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0) 00:23:35.792 
[2024-07-24 19:06:13.351689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.792 [2024-07-24 19:06:13.351707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:35.792 [2024-07-24 19:06:13.363595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1959cb0)
00:23:35.792 [2024-07-24 19:06:13.363629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.792 [2024-07-24 19:06:13.363647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:35.792 .. 00:23:37.344 [2024-07-24 19:06:13.379687 .. 19:06:14.866845] (the same three-line pattern repeats for every remaining READ on qid:1: a data digest error on tqpair=(0x1959cb0), the failed READ command print, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd:0001; only the timestamps and the cid/lba values differ)
00:23:37.344 
00:23:37.344                                 Latency(us)
00:23:37.344 Device Information                                           : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:37.344 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:23:37.344 nvme0n1                                                      :       2.00   18738.27      73.20       0.00     0.00    6821.16    3179.71   19612.25
00:23:37.344 ===================================================================================================================
00:23:37.344 Total                                                        :             18738.27      73.20       0.00     0.00    6821.16    3179.71   19612.25
00:23:37.344 0
00:23:37.344 19:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
19:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
19:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
19:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:23:37.344 | .driver_specific
00:23:37.344 | .nvme_error
00:23:37.344 | .status_code
00:23:37.344 | .command_transient_transport_error'
00:23:37.601 19:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 147 > 0 ))
00:23:37.601 19:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3244589
00:23:37.601 19:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3244589 ']'
00:23:37.601 19:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3244589
00:23:37.601 19:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:23:37.602 19:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:37.602 19:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3244589
00:23:37.602 19:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:23:37.602 19:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:23:37.602 19:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3244589'
00:23:37.602 killing process with pid 3244589
00:23:37.602 19:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3244589
00:23:37.602 Received shutdown signal, test time was about 2.000000 seconds
00:23:37.602 
00:23:37.602                                 Latency(us)
00:23:37.602 Device Information                                           : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:37.602 ===================================================================================================================
00:23:37.602 Total                                                        :                 0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:23:37.602 19:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3244589
00:23:37.859 19:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:23:37.859 19:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:23:37.859 19:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:23:37.859 19:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:23:38.118 19:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
19:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3244993
19:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
19:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3244993 /var/tmp/bperf.sock
19:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3244993 ']'
19:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
19:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
19:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
19:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
19:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:38.118 [2024-07-24 19:06:15.507037] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization...
00:23:38.118 [2024-07-24 19:06:15.507130] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3244993 ]
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-24 19:06:15.567706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-24 19:06:15.682697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:23:38.376 19:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
19:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
19:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
19:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:38.634 19:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
19:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
19:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
19:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
19:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
19:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:39.200 nvme0n1
00:23:39.200 19:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
19:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
19:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
19:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
19:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
19:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:23:39.200 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:39.200 Zero copy mechanism will not be used.
00:23:39.200 Running I/O for 2 seconds...
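The host/digest.sh trace above (lines @61 through @69) is the whole error-injection recipe for this pass. The following is a minimal stand-alone sketch of the same sequence, assuming (as digest.sh's rpc_cmd appears to here) that the nvmf target answers on rpc.py's default RPC socket while bdevperf listens on /var/tmp/bperf.sock; the rpc, bperf_py, and sock shell variables are shorthand introduced only for this sketch:

    #!/usr/bin/env bash
    # Sketch: re-create the digest-error scenario driven by the trace above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bperf_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
    sock=/var/tmp/bperf.sock

    # bdevperf side: keep per-opcode NVMe error counters and retry failed I/O
    # indefinitely, so injected errors are counted rather than failing the job.
    "$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Target side (default socket): clear any stale crc32c error injection.
    "$rpc" accel_error_inject_error -o crc32c -t disable

    # Attach the remote subsystem with TCP data digest enabled (--ddgst).
    "$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt the next 32 crc32c operations on the target, so the host sees
    # data digest errors and completes reads with COMMAND TRANSIENT TRANSPORT ERROR.
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32

    # Kick off the bdevperf job configured on the command line (-w randread -o 131072 -q 16).
    "$bperf_py" -s "$sock" perform_tests

The key design point visible in the trace is that --bdev-retry-count -1 keeps the bdev layer retrying through the injected digest failures, which is why the run completes while still accumulating transient-error completions.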
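After each run, digest.sh decides pass/fail from the error counters kept by --nvme-error-stat; the (( 147 > 0 )) arithmetic test earlier in this log is exactly that check for the depth-128 pass. A sketch of the same check, using only the bdev_get_iostat call and jq filter shown in the trace (socket path and bdev name as above):

    #!/usr/bin/env bash
    # Sketch of get_transient_errcount: count READ completions that ended in
    # COMMAND TRANSIENT TRANSPORT ERROR, as recorded by bdevperf's NVMe error stats.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    bdev=nvme0n1

    errcount=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error')

    # The test passes only if the injected digest errors actually surfaced
    # (147 of them did in the depth-128 run above).
    if (( errcount > 0 )); then
        echo "detected $errcount transient transport errors"
    else
        echo "no transient transport errors recorded" >&2
        exit 1
    fi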
00:23:39.200 [2024-07-24 19:06:16.641639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.200 [2024-07-24 19:06:16.641691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.200 [2024-07-24 19:06:16.641712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.200 [2024-07-24 19:06:16.651980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.200 [2024-07-24 19:06:16.652016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.200 [2024-07-24 19:06:16.652036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.200 [2024-07-24 19:06:16.662236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.200 [2024-07-24 19:06:16.662266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.200 [2024-07-24 19:06:16.662298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.200 [2024-07-24 19:06:16.672429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.200 [2024-07-24 19:06:16.672479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.200 [2024-07-24 19:06:16.672499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.200 [2024-07-24 19:06:16.682715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.200 [2024-07-24 19:06:16.682750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.200 [2024-07-24 19:06:16.682770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.200 [2024-07-24 19:06:16.692887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.200 [2024-07-24 19:06:16.692922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.200 [2024-07-24 19:06:16.692941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.200 [2024-07-24 19:06:16.703334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.200 [2024-07-24 19:06:16.703364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.200 [2024-07-24 19:06:16.703398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.200 [2024-07-24 19:06:16.713672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.200 [2024-07-24 19:06:16.713708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.200 [2024-07-24 19:06:16.713727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.200 [2024-07-24 19:06:16.723994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.200 [2024-07-24 19:06:16.724029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.200 [2024-07-24 19:06:16.724048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.200 [2024-07-24 19:06:16.734294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.200 [2024-07-24 19:06:16.734325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.200 [2024-07-24 19:06:16.734343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.200 [2024-07-24 19:06:16.744553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.200 [2024-07-24 19:06:16.744587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.200 [2024-07-24 19:06:16.744613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.200 [2024-07-24 19:06:16.754720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.200 [2024-07-24 19:06:16.754754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.200 [2024-07-24 19:06:16.754774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.200 [2024-07-24 19:06:16.765070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.200 [2024-07-24 19:06:16.765113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.200 [2024-07-24 19:06:16.765136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.200 [2024-07-24 19:06:16.775383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.200 [2024-07-24 19:06:16.775429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.200 [2024-07-24 19:06:16.775449] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.200 [2024-07-24 19:06:16.785638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.200 [2024-07-24 19:06:16.785678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.201 [2024-07-24 19:06:16.785698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.201 [2024-07-24 19:06:16.795891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.201 [2024-07-24 19:06:16.795926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.201 [2024-07-24 19:06:16.795945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.459 [2024-07-24 19:06:16.806330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.459 [2024-07-24 19:06:16.806376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.459 [2024-07-24 19:06:16.806395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.459 [2024-07-24 19:06:16.816543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.459 [2024-07-24 19:06:16.816577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.459 [2024-07-24 19:06:16.816597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.459 [2024-07-24 19:06:16.826799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.459 [2024-07-24 19:06:16.826834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.460 [2024-07-24 19:06:16.826853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.460 [2024-07-24 19:06:16.837039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.460 [2024-07-24 19:06:16.837078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.460 [2024-07-24 19:06:16.837098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.460 [2024-07-24 19:06:16.847111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.460 [2024-07-24 19:06:16.847160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:39.460 [2024-07-24 19:06:16.847177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.460 [2024-07-24 19:06:16.857191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.460 [2024-07-24 19:06:16.857221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.460 [2024-07-24 19:06:16.857253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.460 [2024-07-24 19:06:16.867329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.460 [2024-07-24 19:06:16.867358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.460 [2024-07-24 19:06:16.867389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.460 [2024-07-24 19:06:16.877439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.460 [2024-07-24 19:06:16.877468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.460 [2024-07-24 19:06:16.877483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.460 [2024-07-24 19:06:16.887633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.460 [2024-07-24 19:06:16.887667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.460 [2024-07-24 19:06:16.887686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.460 [2024-07-24 19:06:16.898261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.460 [2024-07-24 19:06:16.898307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.460 [2024-07-24 19:06:16.898325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.460 [2024-07-24 19:06:16.908919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.460 [2024-07-24 19:06:16.908953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.460 [2024-07-24 19:06:16.908973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.460 [2024-07-24 19:06:16.919603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.460 [2024-07-24 19:06:16.919639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.460 [2024-07-24 19:06:16.919658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.460 [2024-07-24 19:06:16.929899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.460 [2024-07-24 19:06:16.929934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.460 [2024-07-24 19:06:16.929953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.460 [2024-07-24 19:06:16.939906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.460 [2024-07-24 19:06:16.939940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.460 [2024-07-24 19:06:16.939959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.460 [2024-07-24 19:06:16.950048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.460 [2024-07-24 19:06:16.950082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.460 [2024-07-24 19:06:16.950111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.460 [2024-07-24 19:06:16.960014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.460 [2024-07-24 19:06:16.960049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.460 [2024-07-24 19:06:16.960068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.460 [2024-07-24 19:06:16.970042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.460 [2024-07-24 19:06:16.970077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.460 [2024-07-24 19:06:16.970096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.460 [2024-07-24 19:06:16.980047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.460 [2024-07-24 19:06:16.980082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.460 [2024-07-24 19:06:16.980113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.460 [2024-07-24 19:06:16.990128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.460 [2024-07-24 19:06:16.990174] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.460 [2024-07-24 19:06:16.990191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.460 [2024-07-24 19:06:17.000155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.460 [2024-07-24 19:06:17.000187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.460 [2024-07-24 19:06:17.000206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.460 [2024-07-24 19:06:17.010545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.460 [2024-07-24 19:06:17.010582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.460 [2024-07-24 19:06:17.010608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.460 [2024-07-24 19:06:17.021397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.460 [2024-07-24 19:06:17.021447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.460 [2024-07-24 19:06:17.021467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.460 [2024-07-24 19:06:17.031543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.460 [2024-07-24 19:06:17.031578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.460 [2024-07-24 19:06:17.031597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.460 [2024-07-24 19:06:17.041709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.460 [2024-07-24 19:06:17.041746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.460 [2024-07-24 19:06:17.041765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.460 [2024-07-24 19:06:17.052086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.460 [2024-07-24 19:06:17.052133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.460 [2024-07-24 19:06:17.052153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.719 [2024-07-24 19:06:17.062379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 
00:23:39.719 [2024-07-24 19:06:17.062437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.719 [2024-07-24 19:06:17.062457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.719 [2024-07-24 19:06:17.072643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.719 [2024-07-24 19:06:17.072679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.719 [2024-07-24 19:06:17.072699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.719 [2024-07-24 19:06:17.082830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.719 [2024-07-24 19:06:17.082866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.719 [2024-07-24 19:06:17.082885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.719 [2024-07-24 19:06:17.092892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.719 [2024-07-24 19:06:17.092928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.719 [2024-07-24 19:06:17.092947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.719 [2024-07-24 19:06:17.103114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.719 [2024-07-24 19:06:17.103161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.719 [2024-07-24 19:06:17.103176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.719 [2024-07-24 19:06:17.113122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.719 [2024-07-24 19:06:17.113168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.719 [2024-07-24 19:06:17.113184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.719 [2024-07-24 19:06:17.123098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.719 [2024-07-24 19:06:17.123158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.719 [2024-07-24 19:06:17.123175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.719 [2024-07-24 19:06:17.133395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.719 [2024-07-24 19:06:17.133425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.719 [2024-07-24 19:06:17.133457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.719 [2024-07-24 19:06:17.143557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.719 [2024-07-24 19:06:17.143593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.719 [2024-07-24 19:06:17.143615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.719 [2024-07-24 19:06:17.153708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.719 [2024-07-24 19:06:17.153744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.719 [2024-07-24 19:06:17.153763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.719 [2024-07-24 19:06:17.163944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.719 [2024-07-24 19:06:17.163992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.719 [2024-07-24 19:06:17.164012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.719 [2024-07-24 19:06:17.174131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.719 [2024-07-24 19:06:17.174177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.719 [2024-07-24 19:06:17.174193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.719 [2024-07-24 19:06:17.184123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.719 [2024-07-24 19:06:17.184168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.719 [2024-07-24 19:06:17.184188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.719 [2024-07-24 19:06:17.194237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.719 [2024-07-24 19:06:17.194267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.719 [2024-07-24 19:06:17.194299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.719 [2024-07-24 19:06:17.204331] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.719 [2024-07-24 19:06:17.204361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.719 [2024-07-24 19:06:17.204392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.719 [2024-07-24 19:06:17.214490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.719 [2024-07-24 19:06:17.214525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.719 [2024-07-24 19:06:17.214544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.719 [2024-07-24 19:06:17.224450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.719 [2024-07-24 19:06:17.224486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.720 [2024-07-24 19:06:17.224505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.720 [2024-07-24 19:06:17.234607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.720 [2024-07-24 19:06:17.234643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.720 [2024-07-24 19:06:17.234662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.720 [2024-07-24 19:06:17.244745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.720 [2024-07-24 19:06:17.244782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.720 [2024-07-24 19:06:17.244801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.720 [2024-07-24 19:06:17.255166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.720 [2024-07-24 19:06:17.255196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.720 [2024-07-24 19:06:17.255228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.720 [2024-07-24 19:06:17.265425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.720 [2024-07-24 19:06:17.265461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.720 [2024-07-24 19:06:17.265480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:23:39.720 [2024-07-24 19:06:17.275506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.720 [2024-07-24 19:06:17.275547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.720 [2024-07-24 19:06:17.275568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.720 [2024-07-24 19:06:17.285647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.720 [2024-07-24 19:06:17.285682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.720 [2024-07-24 19:06:17.285701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.720 [2024-07-24 19:06:17.295705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.720 [2024-07-24 19:06:17.295739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.720 [2024-07-24 19:06:17.295758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.720 [2024-07-24 19:06:17.305752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.720 [2024-07-24 19:06:17.305787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.720 [2024-07-24 19:06:17.305806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.720 [2024-07-24 19:06:17.315765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.720 [2024-07-24 19:06:17.315800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.720 [2024-07-24 19:06:17.315819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.978 [2024-07-24 19:06:17.325925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.978 [2024-07-24 19:06:17.325961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.978 [2024-07-24 19:06:17.325980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.978 [2024-07-24 19:06:17.335948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.978 [2024-07-24 19:06:17.335982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.978 [2024-07-24 19:06:17.336001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.979 [2024-07-24 19:06:17.346190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.979 [2024-07-24 19:06:17.346236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.979 [2024-07-24 19:06:17.346252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.979 [2024-07-24 19:06:17.356329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.979 [2024-07-24 19:06:17.356359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.979 [2024-07-24 19:06:17.356375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.979 [2024-07-24 19:06:17.366524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.979 [2024-07-24 19:06:17.366560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.979 [2024-07-24 19:06:17.366579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.979 [2024-07-24 19:06:17.376768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.979 [2024-07-24 19:06:17.376803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.979 [2024-07-24 19:06:17.376823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.979 [2024-07-24 19:06:17.387019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.979 [2024-07-24 19:06:17.387054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.979 [2024-07-24 19:06:17.387073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.979 [2024-07-24 19:06:17.397085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.979 [2024-07-24 19:06:17.397144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.979 [2024-07-24 19:06:17.397162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.979 [2024-07-24 19:06:17.407398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.979 [2024-07-24 19:06:17.407446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.979 [2024-07-24 19:06:17.407466] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.979 [2024-07-24 19:06:17.417537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.979 [2024-07-24 19:06:17.417571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.979 [2024-07-24 19:06:17.417590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.979 [2024-07-24 19:06:17.427553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.979 [2024-07-24 19:06:17.427588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.979 [2024-07-24 19:06:17.427608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.979 [2024-07-24 19:06:17.437560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.979 [2024-07-24 19:06:17.437595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.979 [2024-07-24 19:06:17.437614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.979 [2024-07-24 19:06:17.447617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.979 [2024-07-24 19:06:17.447652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.979 [2024-07-24 19:06:17.447678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.979 [2024-07-24 19:06:17.457834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.979 [2024-07-24 19:06:17.457869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.979 [2024-07-24 19:06:17.457888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.979 [2024-07-24 19:06:17.467880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.979 [2024-07-24 19:06:17.467915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.979 [2024-07-24 19:06:17.467934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.979 [2024-07-24 19:06:17.477929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.979 [2024-07-24 19:06:17.477964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:39.979 [2024-07-24 19:06:17.477983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.979 [2024-07-24 19:06:17.488006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.979 [2024-07-24 19:06:17.488040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.979 [2024-07-24 19:06:17.488059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.979 [2024-07-24 19:06:17.498078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.979 [2024-07-24 19:06:17.498122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.979 [2024-07-24 19:06:17.498158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.979 [2024-07-24 19:06:17.508506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.979 [2024-07-24 19:06:17.508540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.979 [2024-07-24 19:06:17.508559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.979 [2024-07-24 19:06:17.518590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.979 [2024-07-24 19:06:17.518625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.979 [2024-07-24 19:06:17.518644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.979 [2024-07-24 19:06:17.528724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.979 [2024-07-24 19:06:17.528759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.979 [2024-07-24 19:06:17.528779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.979 [2024-07-24 19:06:17.538881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.979 [2024-07-24 19:06:17.538921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.979 [2024-07-24 19:06:17.538941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.979 [2024-07-24 19:06:17.549241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.979 [2024-07-24 19:06:17.549284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.979 [2024-07-24 19:06:17.549302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.979 [2024-07-24 19:06:17.559287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.979 [2024-07-24 19:06:17.559316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.979 [2024-07-24 19:06:17.559347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.979 [2024-07-24 19:06:17.569368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.979 [2024-07-24 19:06:17.569411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.979 [2024-07-24 19:06:17.569427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.979 [2024-07-24 19:06:17.579540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:39.979 [2024-07-24 19:06:17.579577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.979 [2024-07-24 19:06:17.579598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.238 [2024-07-24 19:06:17.589759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.238 [2024-07-24 19:06:17.589795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.238 [2024-07-24 19:06:17.589815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.238 [2024-07-24 19:06:17.599936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.238 [2024-07-24 19:06:17.599971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.238 [2024-07-24 19:06:17.599990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.238 [2024-07-24 19:06:17.610092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.238 [2024-07-24 19:06:17.610135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.238 [2024-07-24 19:06:17.610168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.238 [2024-07-24 19:06:17.620161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.238 [2024-07-24 19:06:17.620204] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.238 [2024-07-24 19:06:17.620226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.238 [2024-07-24 19:06:17.630308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.238 [2024-07-24 19:06:17.630336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.238 [2024-07-24 19:06:17.630367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.238 [2024-07-24 19:06:17.640388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.238 [2024-07-24 19:06:17.640430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.238 [2024-07-24 19:06:17.640446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.238 [2024-07-24 19:06:17.650605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.238 [2024-07-24 19:06:17.650640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.238 [2024-07-24 19:06:17.650659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.238 [2024-07-24 19:06:17.660771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.238 [2024-07-24 19:06:17.660806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.238 [2024-07-24 19:06:17.660825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.238 [2024-07-24 19:06:17.670832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.238 [2024-07-24 19:06:17.670867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.238 [2024-07-24 19:06:17.670887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.238 [2024-07-24 19:06:17.681074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.238 [2024-07-24 19:06:17.681115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.238 [2024-07-24 19:06:17.681137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.238 [2024-07-24 19:06:17.691132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 
00:23:40.238 [2024-07-24 19:06:17.691175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.238 [2024-07-24 19:06:17.691191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.238 [2024-07-24 19:06:17.701096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.238 [2024-07-24 19:06:17.701153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.238 [2024-07-24 19:06:17.701170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.238 [2024-07-24 19:06:17.711110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.238 [2024-07-24 19:06:17.711165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.238 [2024-07-24 19:06:17.711183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.238 [2024-07-24 19:06:17.721304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.238 [2024-07-24 19:06:17.721337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.238 [2024-07-24 19:06:17.721354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.238 [2024-07-24 19:06:17.731467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.238 [2024-07-24 19:06:17.731511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.238 [2024-07-24 19:06:17.731531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.238 [2024-07-24 19:06:17.741582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.238 [2024-07-24 19:06:17.741618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.238 [2024-07-24 19:06:17.741637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.238 [2024-07-24 19:06:17.751843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.238 [2024-07-24 19:06:17.751877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.238 [2024-07-24 19:06:17.751897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.238 [2024-07-24 19:06:17.761934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.238 [2024-07-24 19:06:17.761968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.238 [2024-07-24 19:06:17.761987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.238 [2024-07-24 19:06:17.771972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.238 [2024-07-24 19:06:17.772007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.238 [2024-07-24 19:06:17.772026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.238 [2024-07-24 19:06:17.782090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.238 [2024-07-24 19:06:17.782134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.238 [2024-07-24 19:06:17.782166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.239 [2024-07-24 19:06:17.792155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.239 [2024-07-24 19:06:17.792195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.239 [2024-07-24 19:06:17.792227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.239 [2024-07-24 19:06:17.802445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.239 [2024-07-24 19:06:17.802480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.239 [2024-07-24 19:06:17.802499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.239 [2024-07-24 19:06:17.812528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.239 [2024-07-24 19:06:17.812563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.239 [2024-07-24 19:06:17.812582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.239 [2024-07-24 19:06:17.822609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.239 [2024-07-24 19:06:17.822645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.239 [2024-07-24 19:06:17.822664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.239 [2024-07-24 19:06:17.832717] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.239 [2024-07-24 19:06:17.832752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.239 [2024-07-24 19:06:17.832771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.498 [2024-07-24 19:06:17.842877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.498 [2024-07-24 19:06:17.842913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.498 [2024-07-24 19:06:17.842932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.498 [2024-07-24 19:06:17.852964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.498 [2024-07-24 19:06:17.853000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.498 [2024-07-24 19:06:17.853019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.498 [2024-07-24 19:06:17.863127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.498 [2024-07-24 19:06:17.863176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.498 [2024-07-24 19:06:17.863193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.498 [2024-07-24 19:06:17.873363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.498 [2024-07-24 19:06:17.873393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.498 [2024-07-24 19:06:17.873409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.498 [2024-07-24 19:06:17.883452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.498 [2024-07-24 19:06:17.883487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.498 [2024-07-24 19:06:17.883512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.498 [2024-07-24 19:06:17.893587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.498 [2024-07-24 19:06:17.893622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.498 [2024-07-24 19:06:17.893641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:23:40.498 [2024-07-24 19:06:17.903559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.498 [2024-07-24 19:06:17.903590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.498 [2024-07-24 19:06:17.903622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.498 [2024-07-24 19:06:17.914559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.498 [2024-07-24 19:06:17.914595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.498 [2024-07-24 19:06:17.914615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.498 [2024-07-24 19:06:17.924834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.498 [2024-07-24 19:06:17.924870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.498 [2024-07-24 19:06:17.924889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.498 [2024-07-24 19:06:17.934986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.498 [2024-07-24 19:06:17.935021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.498 [2024-07-24 19:06:17.935041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.498 [2024-07-24 19:06:17.945111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.498 [2024-07-24 19:06:17.945146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.498 [2024-07-24 19:06:17.945183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.498 [2024-07-24 19:06:17.955113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.498 [2024-07-24 19:06:17.955160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.498 [2024-07-24 19:06:17.955177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.498 [2024-07-24 19:06:17.965126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.498 [2024-07-24 19:06:17.965171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.498 [2024-07-24 19:06:17.965188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.498 [2024-07-24 19:06:17.974469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.498 [2024-07-24 19:06:17.974506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.498 [2024-07-24 19:06:17.974524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.498 [2024-07-24 19:06:17.983852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.498 [2024-07-24 19:06:17.983896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.498 [2024-07-24 19:06:17.983912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.498 [2024-07-24 19:06:17.993152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.498 [2024-07-24 19:06:17.993197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.498 [2024-07-24 19:06:17.993215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.498 [2024-07-24 19:06:18.002518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.498 [2024-07-24 19:06:18.002561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.498 [2024-07-24 19:06:18.002578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.498 [2024-07-24 19:06:18.011942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.498 [2024-07-24 19:06:18.011972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.498 [2024-07-24 19:06:18.012003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.498 [2024-07-24 19:06:18.021182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.498 [2024-07-24 19:06:18.021212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.498 [2024-07-24 19:06:18.021244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.498 [2024-07-24 19:06:18.030539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.498 [2024-07-24 19:06:18.030583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.498 [2024-07-24 19:06:18.030599] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.498 [2024-07-24 19:06:18.040053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.498 [2024-07-24 19:06:18.040084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.498 [2024-07-24 19:06:18.040124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.498 [2024-07-24 19:06:18.049390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.498 [2024-07-24 19:06:18.049433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.498 [2024-07-24 19:06:18.049448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.499 [2024-07-24 19:06:18.058838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.499 [2024-07-24 19:06:18.058882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.499 [2024-07-24 19:06:18.058898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.499 [2024-07-24 19:06:18.068141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.499 [2024-07-24 19:06:18.068186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.499 [2024-07-24 19:06:18.068202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.499 [2024-07-24 19:06:18.077443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.499 [2024-07-24 19:06:18.077487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.499 [2024-07-24 19:06:18.077504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.499 [2024-07-24 19:06:18.086702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.499 [2024-07-24 19:06:18.086732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.499 [2024-07-24 19:06:18.086767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.499 [2024-07-24 19:06:18.096178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.499 [2024-07-24 19:06:18.096224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:40.499 [2024-07-24 19:06:18.096242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.757 [2024-07-24 19:06:18.105582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.757 [2024-07-24 19:06:18.105613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.757 [2024-07-24 19:06:18.105644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.757 [2024-07-24 19:06:18.115028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.757 [2024-07-24 19:06:18.115058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.757 [2024-07-24 19:06:18.115089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.757 [2024-07-24 19:06:18.124499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.757 [2024-07-24 19:06:18.124545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.757 [2024-07-24 19:06:18.124560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.757 [2024-07-24 19:06:18.134019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.757 [2024-07-24 19:06:18.134048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.757 [2024-07-24 19:06:18.134086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.757 [2024-07-24 19:06:18.143991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.757 [2024-07-24 19:06:18.144021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.757 [2024-07-24 19:06:18.144037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.757 [2024-07-24 19:06:18.153961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.757 [2024-07-24 19:06:18.153991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.757 [2024-07-24 19:06:18.154023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.757 [2024-07-24 19:06:18.163951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.757 [2024-07-24 19:06:18.163996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.757 [2024-07-24 19:06:18.164013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.757 [2024-07-24 19:06:18.173657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.757 [2024-07-24 19:06:18.173702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.757 [2024-07-24 19:06:18.173719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.757 [2024-07-24 19:06:18.183078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.757 [2024-07-24 19:06:18.183135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.757 [2024-07-24 19:06:18.183153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.757 [2024-07-24 19:06:18.192572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.757 [2024-07-24 19:06:18.192601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.757 [2024-07-24 19:06:18.192632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.758 [2024-07-24 19:06:18.202037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.758 [2024-07-24 19:06:18.202067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.758 [2024-07-24 19:06:18.202098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.758 [2024-07-24 19:06:18.211358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.758 [2024-07-24 19:06:18.211403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.758 [2024-07-24 19:06:18.211419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.758 [2024-07-24 19:06:18.220671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.758 [2024-07-24 19:06:18.220714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.758 [2024-07-24 19:06:18.220731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.758 [2024-07-24 19:06:18.230384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.758 [2024-07-24 19:06:18.230414] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.758 [2024-07-24 19:06:18.230444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.758 [2024-07-24 19:06:18.240497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.758 [2024-07-24 19:06:18.240527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.758 [2024-07-24 19:06:18.240558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.758 [2024-07-24 19:06:18.250428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.758 [2024-07-24 19:06:18.250473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.758 [2024-07-24 19:06:18.250489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.758 [2024-07-24 19:06:18.260536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.758 [2024-07-24 19:06:18.260580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.758 [2024-07-24 19:06:18.260596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.758 [2024-07-24 19:06:18.269758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.758 [2024-07-24 19:06:18.269804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.758 [2024-07-24 19:06:18.269820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.758 [2024-07-24 19:06:18.279206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.758 [2024-07-24 19:06:18.279253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.758 [2024-07-24 19:06:18.279269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.758 [2024-07-24 19:06:18.288580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.758 [2024-07-24 19:06:18.288611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.758 [2024-07-24 19:06:18.288643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.758 [2024-07-24 19:06:18.297876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 
00:23:40.758 [2024-07-24 19:06:18.297908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.758 [2024-07-24 19:06:18.297946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.758 [2024-07-24 19:06:18.307384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.758 [2024-07-24 19:06:18.307435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.758 [2024-07-24 19:06:18.307453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.758 [2024-07-24 19:06:18.316778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.758 [2024-07-24 19:06:18.316823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.758 [2024-07-24 19:06:18.316839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.758 [2024-07-24 19:06:18.326601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.758 [2024-07-24 19:06:18.326632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.758 [2024-07-24 19:06:18.326664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.758 [2024-07-24 19:06:18.336811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.758 [2024-07-24 19:06:18.336840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.758 [2024-07-24 19:06:18.336871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.758 [2024-07-24 19:06:18.347022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.758 [2024-07-24 19:06:18.347053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.758 [2024-07-24 19:06:18.347084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.758 [2024-07-24 19:06:18.357126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:40.758 [2024-07-24 19:06:18.357170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.758 [2024-07-24 19:06:18.357187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:41.017 [2024-07-24 19:06:18.366610] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:41.017 [2024-07-24 19:06:18.366641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.017 [2024-07-24 19:06:18.366673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:41.017 [2024-07-24 19:06:18.375934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:41.017 [2024-07-24 19:06:18.375964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.017 [2024-07-24 19:06:18.375996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:41.017 [2024-07-24 19:06:18.385572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:41.017 [2024-07-24 19:06:18.385612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.017 [2024-07-24 19:06:18.385646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:41.017 [2024-07-24 19:06:18.395293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:41.017 [2024-07-24 19:06:18.395324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.017 [2024-07-24 19:06:18.395356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:41.017 [2024-07-24 19:06:18.405050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:41.017 [2024-07-24 19:06:18.405082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.017 [2024-07-24 19:06:18.405124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:41.017 [2024-07-24 19:06:18.414628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:41.017 [2024-07-24 19:06:18.414659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.017 [2024-07-24 19:06:18.414690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:41.017 [2024-07-24 19:06:18.424655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:41.017 [2024-07-24 19:06:18.424685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.017 [2024-07-24 19:06:18.424717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:23:41.017 [2024-07-24 19:06:18.434794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:41.017 [2024-07-24 19:06:18.434826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.017 [2024-07-24 19:06:18.434859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:41.017 [2024-07-24 19:06:18.443635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:41.017 [2024-07-24 19:06:18.443681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.017 [2024-07-24 19:06:18.443697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:41.017 [2024-07-24 19:06:18.454037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:41.017 [2024-07-24 19:06:18.454068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.017 [2024-07-24 19:06:18.454084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:41.017 [2024-07-24 19:06:18.463856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:41.017 [2024-07-24 19:06:18.463886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.017 [2024-07-24 19:06:18.463902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:41.017 [2024-07-24 19:06:18.473380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:41.017 [2024-07-24 19:06:18.473426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.017 [2024-07-24 19:06:18.473443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:41.017 [2024-07-24 19:06:18.482891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:41.017 [2024-07-24 19:06:18.482922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.017 [2024-07-24 19:06:18.482939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:41.017 [2024-07-24 19:06:18.492429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:41.017 [2024-07-24 19:06:18.492459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.017 [2024-07-24 19:06:18.492476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:41.017 [2024-07-24 19:06:18.501870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:41.017 [2024-07-24 19:06:18.501915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.017 [2024-07-24 19:06:18.501931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:41.017 [2024-07-24 19:06:18.511431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:41.017 [2024-07-24 19:06:18.511462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.017 [2024-07-24 19:06:18.511480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:41.017 [2024-07-24 19:06:18.521030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:41.017 [2024-07-24 19:06:18.521075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.017 [2024-07-24 19:06:18.521091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:41.017 [2024-07-24 19:06:18.530489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:41.017 [2024-07-24 19:06:18.530521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.017 [2024-07-24 19:06:18.530553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:41.017 [2024-07-24 19:06:18.540025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:41.017 [2024-07-24 19:06:18.540070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.017 [2024-07-24 19:06:18.540087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:41.017 [2024-07-24 19:06:18.549467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:41.017 [2024-07-24 19:06:18.549513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.017 [2024-07-24 19:06:18.549550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:41.017 [2024-07-24 19:06:18.558989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:41.017 [2024-07-24 19:06:18.559019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.017 [2024-07-24 19:06:18.559053] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:41.017 [2024-07-24 19:06:18.568381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:41.018 [2024-07-24 19:06:18.568424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.018 [2024-07-24 19:06:18.568440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:41.018 [2024-07-24 19:06:18.577734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:41.018 [2024-07-24 19:06:18.577780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.018 [2024-07-24 19:06:18.577798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:41.018 [2024-07-24 19:06:18.587036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:41.018 [2024-07-24 19:06:18.587066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.018 [2024-07-24 19:06:18.587098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:41.018 [2024-07-24 19:06:18.596630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:41.018 [2024-07-24 19:06:18.596676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.018 [2024-07-24 19:06:18.596693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:41.018 [2024-07-24 19:06:18.606076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:41.018 [2024-07-24 19:06:18.606116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.018 [2024-07-24 19:06:18.606135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:41.018 [2024-07-24 19:06:18.615376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:41.018 [2024-07-24 19:06:18.615424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.018 [2024-07-24 19:06:18.615441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:41.276 [2024-07-24 19:06:18.624780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1196290) 00:23:41.276 [2024-07-24 19:06:18.624812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.276 [2024-07-24 19:06:18.624845] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:41.276
00:23:41.276 Latency(us)
00:23:41.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:41.276 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:23:41.276 nvme0n1 : 2.00 3108.24 388.53 0.00 0.00 5143.04 1535.24 13301.38
00:23:41.276 ===================================================================================================================
00:23:41.276 Total : 3108.24 388.53 0.00 0.00 5143.04 1535.24 13301.38
00:23:41.276 0
00:23:41.276 19:06:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
19:06:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
19:06:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
19:06:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:23:41.276 | .driver_specific
00:23:41.276 | .nvme_error
00:23:41.276 | .status_code
00:23:41.276 | .command_transient_transport_error'
00:23:41.534 19:06:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 200 > 0 ))
00:23:41.534 19:06:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3244993
00:23:41.534 19:06:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3244993 ']'
00:23:41.534 19:06:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3244993
00:23:41.534 19:06:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:23:41.534 19:06:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:41.534 19:06:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3244993
00:23:41.534 19:06:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:23:41.534 19:06:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:23:41.534 19:06:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3244993'
00:23:41.534 killing process with pid 3244993
00:23:41.534 19:06:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3244993
00:23:41.534 Received shutdown signal, test time was about 2.000000 seconds
00:23:41.534
00:23:41.534 Latency(us)
00:23:41.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:41.534 ===================================================================================================================
00:23:41.534 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
19:06:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3244993
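
The (( 200 > 0 )) check above is the pass condition for the randread leg: get_transient_errcount must report at least one counted transient transport error, and here it returned 200. A minimal sketch of that helper, reconstructed from the trace (the RPC call and jq filter are verbatim from the lines above; wrapping them in a standalone function this way is an assumption for illustration):

    # Sketch: read back the transient-error counter that
    # bdev_nvme_set_options --nvme-error-stat makes the bdev layer keep.
    get_transient_errcount() {
        local bdev=$1
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }
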
00:23:41.792 19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3245403
19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3245403 /var/tmp/bperf.sock
19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3245403 ']'
19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
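
The second bdevperf instance is launched with -z, so it comes up idle and only runs I/O once driven over the -r RPC socket; waitforlisten polls that socket (up to max_retries=100) before the test goes on to configure error injection. A simplified sketch of the pattern (the backgrounding and polling loop below stand in for autotest_common.sh's waitforlisten and are not its verbatim source):

    # -m 2 pins the reactor to core 1, -z idles bdevperf until RPCs arrive,
    # -r names the UNIX domain socket used for those RPCs.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    for ((i = 0; i < 100; i++)); do            # max_retries=100 in the trace
        [[ -S /var/tmp/bperf.sock ]] && break  # socket is up, RPCs can flow
        sleep 0.1
    done
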
00:23:41.792 [2024-07-24 19:06:19.253001] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization...
00:23:41.792 [2024-07-24 19:06:19.253082] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3245403 ]
00:23:41.792 EAL: No free 2048 kB hugepages reported on node 1
00:23:42.050 [2024-07-24 19:06:19.314948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:42.050 [2024-07-24 19:06:19.433667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:23:42.050 19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:23:42.050 19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:23:42.050 19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:42.050 19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:42.307 19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:42.307 19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:42.307 19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:42.307 19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:42.307 19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:42.307 19:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:42.564 nvme0n1
00:23:42.822 19:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:23:42.822 19:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:42.822 19:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:42.822 19:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:42.822 19:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:23:42.822 19:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:23:42.822 Running I/O for 2 seconds...
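
Every digest error that follows is deliberately manufactured. The accel_error_inject_error -o crc32c -t corrupt -i 256 call arms SPDK's accel error module to corrupt the result of every 256th crc32c operation (-i is the injection interval), so the data digest computed for the affected NVMe/TCP PDUs stops matching and each hit completes as COMMAND TRANSIENT TRANSPORT ERROR; --bdev-retry-count -1 keeps those I/Os retrying while --nvme-error-stat counts them. The RPC sequence just traced, gathered in one place as a sketch (every command is verbatim from the trace above; only the $rpc shorthand variable is added here for readability):

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    # Keep per-status-code NVMe error counters and retry failed I/O forever.
    $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Make sure no crc32c fault is armed while the controller attaches.
    $rpc accel_error_inject_error -o crc32c -t disable
    # Attach the target with data digest (--ddgst) enabled on the TCP qpairs.
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt every 256th crc32c result for the duration of the workload.
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256
    # Drive the timed randwrite workload over the same RPC socket.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests
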
00:23:42.822 [2024-07-24 19:06:20.286904] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190ed920 00:23:42.822 [2024-07-24 19:06:20.288058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.822 [2024-07-24 19:06:20.288119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:42.822 [2024-07-24 19:06:20.299308] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f9f68 00:23:42.822 [2024-07-24 19:06:20.300410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.822 [2024-07-24 19:06:20.300443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:42.823 [2024-07-24 19:06:20.312917] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e38d0 00:23:42.823 [2024-07-24 19:06:20.314207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.823 [2024-07-24 19:06:20.314237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:42.823 [2024-07-24 19:06:20.326352] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e4de8 00:23:42.823 [2024-07-24 19:06:20.327839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.823 [2024-07-24 19:06:20.327871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:42.823 [2024-07-24 19:06:20.339856] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e7818 00:23:42.823 [2024-07-24 19:06:20.341544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.823 [2024-07-24 19:06:20.341575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:42.823 [2024-07-24 19:06:20.351806] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f9f68 00:23:42.823 [2024-07-24 19:06:20.352942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.823 [2024-07-24 19:06:20.352973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:42.823 [2024-07-24 19:06:20.364823] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190ebfd0 00:23:42.823 [2024-07-24 19:06:20.365770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:42.823 [2024-07-24 19:06:20.365801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 
sqhd:006b p:0 m:0 dnr:0
00:23:42.823 [2024-07-24 19:06:20.378264] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e7c50
00:23:42.823 [2024-07-24 19:06:20.379426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:42.823 [2024-07-24 19:06:20.379458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:23:42.823 [2024-07-24 19:06:20.390226] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190fda78
00:23:42.823 [2024-07-24 19:06:20.392348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:42.823 [2024-07-24 19:06:20.392376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:23:42.823 [2024-07-24 19:06:20.401433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190ec408
00:23:42.823 [2024-07-24 19:06:20.402467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:42.823 [2024-07-24 19:06:20.402498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:23:42.823 [2024-07-24 19:06:20.415774] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190edd58
00:23:42.823 [2024-07-24 19:06:20.417188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:42.823 [2024-07-24 19:06:20.417216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:23:43.081 [2024-07-24 19:06:20.429488] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190ea680
00:23:43.081 [2024-07-24 19:06:20.430785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.081 [2024-07-24 19:06:20.430818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:23:43.081 [2024-07-24 19:06:20.441597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e4de8
00:23:43.081 [2024-07-24 19:06:20.442892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.081 [2024-07-24 19:06:20.442922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:23:43.081 [2024-07-24 19:06:20.455863] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f31b8
00:23:43.081 [2024-07-24 19:06:20.457422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.081 [2024-07-24 19:06:20.457453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:23:43.081 [2024-07-24 19:06:20.467883] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e38d0
00:23:43.081 [2024-07-24 19:06:20.469354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.081 [2024-07-24 19:06:20.469382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:23:43.081 [2024-07-24 19:06:20.481322] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190df550
00:23:43.081 [2024-07-24 19:06:20.482991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.081 [2024-07-24 19:06:20.483022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:23:43.081 [2024-07-24 19:06:20.493349] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190feb58
00:23:43.081 [2024-07-24 19:06:20.494464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.081 [2024-07-24 19:06:20.494508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:23:43.081 [2024-07-24 19:06:20.507642] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190fdeb0
00:23:43.081 [2024-07-24 19:06:20.509486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.081 [2024-07-24 19:06:20.509517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:23:43.081 [2024-07-24 19:06:20.521100] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f8618
00:23:43.081 [2024-07-24 19:06:20.523117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.081 [2024-07-24 19:06:20.523148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:23:43.081 [2024-07-24 19:06:20.533034] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190edd58
00:23:43.081 [2024-07-24 19:06:20.534518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.081 [2024-07-24 19:06:20.534550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:23:43.081 [2024-07-24 19:06:20.544693] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190ef270
00:23:43.081 [2024-07-24 19:06:20.546606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:15020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.081 [2024-07-24 19:06:20.546637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:23:43.081 [2024-07-24 19:06:20.556558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e8d30
00:23:43.081 [2024-07-24 19:06:20.557551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.081 [2024-07-24 19:06:20.557583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:23:43.081 [2024-07-24 19:06:20.569886] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f2d80
00:23:43.081 [2024-07-24 19:06:20.571025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.081 [2024-07-24 19:06:20.571056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:23:43.081 [2024-07-24 19:06:20.582013] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190eaab8
00:23:43.081 [2024-07-24 19:06:20.583165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.081 [2024-07-24 19:06:20.583192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:23:43.081 [2024-07-24 19:06:20.596334] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e88f8
00:23:43.081 [2024-07-24 19:06:20.597678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.081 [2024-07-24 19:06:20.597714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:23:43.081 [2024-07-24 19:06:20.609209] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e7818
00:23:43.081 [2024-07-24 19:06:20.610566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.081 [2024-07-24 19:06:20.610598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:23:43.081 [2024-07-24 19:06:20.621231] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190eff18
00:23:43.081 [2024-07-24 19:06:20.622559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.081 [2024-07-24 19:06:20.622595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:23:43.081 [2024-07-24 19:06:20.635559] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f4b08
00:23:43.081 [2024-07-24 19:06:20.637069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.081 [2024-07-24 19:06:20.637118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:23:43.081 [2024-07-24 19:06:20.648825] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190fac10
00:23:43.081 [2024-07-24 19:06:20.650520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.081 [2024-07-24 19:06:20.650551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:23:43.081 [2024-07-24 19:06:20.660920] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190ebb98
00:23:43.081 [2024-07-24 19:06:20.662573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.081 [2024-07-24 19:06:20.662604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:23:43.081 [2024-07-24 19:06:20.672826] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190ecc78
00:23:43.081 [2024-07-24 19:06:20.673992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.081 [2024-07-24 19:06:20.674023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:23:43.339 [2024-07-24 19:06:20.686055] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e01f8
00:23:43.339 [2024-07-24 19:06:20.687003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.339 [2024-07-24 19:06:20.687036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:23:43.339 [2024-07-24 19:06:20.700736] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f92c0
00:23:43.339 [2024-07-24 19:06:20.702723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.339 [2024-07-24 19:06:20.702754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:23:43.339 [2024-07-24 19:06:20.712716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f46d0
00:23:43.339 [2024-07-24 19:06:20.714219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.339 [2024-07-24 19:06:20.714248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:23:43.339 [2024-07-24 19:06:20.725383] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e9e10
00:23:43.339 [2024-07-24 19:06:20.726866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.339 [2024-07-24 19:06:20.726897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:23:43.339 [2024-07-24 19:06:20.738075] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190fa7d8
00:23:43.339 [2024-07-24 19:06:20.739557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.339 [2024-07-24 19:06:20.739588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:23:43.339 [2024-07-24 19:06:20.750794] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190df550
00:23:43.339 [2024-07-24 19:06:20.752380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.339 [2024-07-24 19:06:20.752408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:23:43.339 [2024-07-24 19:06:20.762696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e8088
00:23:43.340 [2024-07-24 19:06:20.764645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.340 [2024-07-24 19:06:20.764676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:23:43.340 [2024-07-24 19:06:20.773660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190df988
00:23:43.340 [2024-07-24 19:06:20.774614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.340 [2024-07-24 19:06:20.774643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:23:43.340 [2024-07-24 19:06:20.787093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e23b8
00:23:43.340 [2024-07-24 19:06:20.788216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.340 [2024-07-24 19:06:20.788244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:23:43.340 [2024-07-24 19:06:20.800469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e5220
00:23:43.340 [2024-07-24 19:06:20.801762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.340 [2024-07-24 19:06:20.801793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:23:43.340 [2024-07-24 19:06:20.814819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e12d8
00:23:43.340 [2024-07-24 19:06:20.816344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.340 [2024-07-24 19:06:20.816371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:23:43.340 [2024-07-24 19:06:20.828029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190eaef0
00:23:43.340 [2024-07-24 19:06:20.829687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.340 [2024-07-24 19:06:20.829718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:23:43.340 [2024-07-24 19:06:20.838547] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e4578
00:23:43.340 [2024-07-24 19:06:20.839473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.340 [2024-07-24 19:06:20.839504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:23:43.340 [2024-07-24 19:06:20.851228] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e6fa8
00:23:43.340 [2024-07-24 19:06:20.852156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.340 [2024-07-24 19:06:20.852183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:23:43.340 [2024-07-24 19:06:20.865635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f57b0
00:23:43.340 [2024-07-24 19:06:20.867304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:8594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.340 [2024-07-24 19:06:20.867331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:23:43.340 [2024-07-24 19:06:20.877575] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e7c50
00:23:43.340 [2024-07-24 19:06:20.878675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.340 [2024-07-24 19:06:20.878705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:23:43.340 [2024-07-24 19:06:20.890060] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f0bc0
00:23:43.340 [2024-07-24 19:06:20.891184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.340 [2024-07-24 19:06:20.891211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:23:43.340 [2024-07-24 19:06:20.902833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190ebb98
00:23:43.340 [2024-07-24 19:06:20.903938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.340 [2024-07-24 19:06:20.903968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:23:43.340 [2024-07-24 19:06:20.915642] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190ea680
00:23:43.340 [2024-07-24 19:06:20.916745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.340 [2024-07-24 19:06:20.916774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:23:43.340 [2024-07-24 19:06:20.928274] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190ff3c8
00:23:43.340 [2024-07-24 19:06:20.929418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.340 [2024-07-24 19:06:20.929445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:23:43.598 [2024-07-24 19:06:20.941536] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e12d8
00:23:43.598 [2024-07-24 19:06:20.942502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.598 [2024-07-24 19:06:20.942535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:23:43.598 [2024-07-24 19:06:20.956183] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f20d8
00:23:43.598 [2024-07-24 19:06:20.958173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.598 [2024-07-24 19:06:20.958207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:23:43.598 [2024-07-24 19:06:20.968056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190efae0
00:23:43.598 [2024-07-24 19:06:20.969562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.598 [2024-07-24 19:06:20.969593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:23:43.598 [2024-07-24 19:06:20.979660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190ec840
00:23:43.598 [2024-07-24 19:06:20.981601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.598 [2024-07-24 19:06:20.981633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:23:43.598 [2024-07-24 19:06:20.991517] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190dfdc0
00:23:43.598 [2024-07-24 19:06:20.992494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.598 [2024-07-24 19:06:20.992525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:23:43.598 [2024-07-24 19:06:21.004794] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f7100
00:23:43.598 [2024-07-24 19:06:21.005929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.598 [2024-07-24 19:06:21.005960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:23:43.598 [2024-07-24 19:06:21.016915] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e4140
00:23:43.598 [2024-07-24 19:06:21.018029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.598 [2024-07-24 19:06:21.018059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:23:43.598 [2024-07-24 19:06:21.031194] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e01f8
00:23:43.598 [2024-07-24 19:06:21.032558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.598 [2024-07-24 19:06:21.032589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:23:43.598 [2024-07-24 19:06:21.043898] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f35f0
00:23:43.598 [2024-07-24 19:06:21.045216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.598 [2024-07-24 19:06:21.045244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:23:43.598 [2024-07-24 19:06:21.056691] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f7538
00:23:43.598 [2024-07-24 19:06:21.057985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.598 [2024-07-24 19:06:21.058015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:23:43.598 [2024-07-24 19:06:21.069824] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f9f68
00:23:43.598 [2024-07-24 19:06:21.071380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.598 [2024-07-24 19:06:21.071407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:23:43.598 [2024-07-24 19:06:21.081964] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190ed920
00:23:43.598 [2024-07-24 19:06:21.083464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.598 [2024-07-24 19:06:21.083494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:23:43.598 [2024-07-24 19:06:21.093938] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f0ff8
00:23:43.598 [2024-07-24 19:06:21.094873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.598 [2024-07-24 19:06:21.094903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:23:43.598 [2024-07-24 19:06:21.106849] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f2510
00:23:43.598 [2024-07-24 19:06:21.107586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.598 [2024-07-24 19:06:21.107618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:23:43.598 [2024-07-24 19:06:21.121480] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f3a28
00:23:43.598 [2024-07-24 19:06:21.123312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.598 [2024-07-24 19:06:21.123339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:23:43.598 [2024-07-24 19:06:21.133358] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f0788
00:23:43.598 [2024-07-24 19:06:21.134643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.598 [2024-07-24 19:06:21.134673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:23:43.598 [2024-07-24 19:06:21.145885] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e6738
00:23:43.598 [2024-07-24 19:06:21.147243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.599 [2024-07-24 19:06:21.147270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:23:43.599 [2024-07-24 19:06:21.160321] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190ea680
00:23:43.599 [2024-07-24 19:06:21.162392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.599 [2024-07-24 19:06:21.162435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:23:43.599 [2024-07-24 19:06:21.173686] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190ec840
00:23:43.599 [2024-07-24 19:06:21.175829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.599 [2024-07-24 19:06:21.175860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:23:43.599 [2024-07-24 19:06:21.182759] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190de470
00:23:43.599 [2024-07-24 19:06:21.183728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.599 [2024-07-24 19:06:21.183772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:23:43.599 [2024-07-24 19:06:21.196241] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f5be8
00:23:43.599 [2024-07-24 19:06:21.197370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.599 [2024-07-24 19:06:21.197414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:23:43.857 [2024-07-24 19:06:21.208440] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190ee5c8
00:23:43.857 [2024-07-24 19:06:21.209536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:6040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.857 [2024-07-24 19:06:21.209569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:23:43.857 [2024-07-24 19:06:21.221861] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e0a68
00:23:43.857 [2024-07-24 19:06:21.223155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.857 [2024-07-24 19:06:21.223183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:23:43.857 [2024-07-24 19:06:21.236098] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e84c0
00:23:43.857 [2024-07-24 19:06:21.237604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.857 [2024-07-24 19:06:21.237636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:23:43.857 [2024-07-24 19:06:21.249343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190de8a8
00:23:43.857 [2024-07-24 19:06:21.251019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.857 [2024-07-24 19:06:21.251051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:23:43.857 [2024-07-24 19:06:21.259853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e4de8
00:23:43.857 [2024-07-24 19:06:21.260791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.857 [2024-07-24 19:06:21.260822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:23:43.858 [2024-07-24 19:06:21.272988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190fc128
00:23:43.858 [2024-07-24 19:06:21.274110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.858 [2024-07-24 19:06:21.274141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:23:43.858 [2024-07-24 19:06:21.285205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e73e0
00:23:43.858 [2024-07-24 19:06:21.286377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.858 [2024-07-24 19:06:21.286426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:23:43.858 [2024-07-24 19:06:21.299513] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190df988
00:23:43.858 [2024-07-24 19:06:21.300807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.858 [2024-07-24 19:06:21.300838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:23:43.858 [2024-07-24 19:06:21.313872] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190fda78
00:23:43.858 [2024-07-24 19:06:21.315851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.858 [2024-07-24 19:06:21.315882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:23:43.858 [2024-07-24 19:06:21.327225] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e7c50
00:23:43.858 [2024-07-24 19:06:21.329415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.858 [2024-07-24 19:06:21.329446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:23:43.858 [2024-07-24 19:06:21.336332] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f4b08
00:23:43.858 [2024-07-24 19:06:21.337291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.858 [2024-07-24 19:06:21.337318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:23:43.858 [2024-07-24 19:06:21.349271] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f2d80
00:23:43.858 [2024-07-24 19:06:21.350214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.858 [2024-07-24 19:06:21.350241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:23:43.858 [2024-07-24 19:06:21.362046] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e84c0
00:23:43.858 [2024-07-24 19:06:21.362968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.858 [2024-07-24 19:06:21.362999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:23:43.858 [2024-07-24 19:06:21.373933] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e3060
00:23:43.858 [2024-07-24 19:06:21.374843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.858 [2024-07-24 19:06:21.374873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:23:43.858 [2024-07-24 19:06:21.388292] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e6738
00:23:43.858 [2024-07-24 19:06:21.389551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.858 [2024-07-24 19:06:21.389581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:23:43.858 [2024-07-24 19:06:21.402674] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190df988
00:23:43.858 [2024-07-24 19:06:21.404489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.858 [2024-07-24 19:06:21.404531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:23:43.858 [2024-07-24 19:06:21.414642] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190ed920
00:23:43.858 [2024-07-24 19:06:21.415894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.858 [2024-07-24 19:06:21.415925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:23:43.858 [2024-07-24 19:06:21.427645] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e5658
00:23:43.858 [2024-07-24 19:06:21.428741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.858 [2024-07-24 19:06:21.428771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:23:43.858 [2024-07-24 19:06:21.439720] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e99d8
00:23:43.858 [2024-07-24 19:06:21.441650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.858 [2024-07-24 19:06:21.441681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:23:43.858 [2024-07-24 19:06:21.451625] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f6cc8
00:23:43.858 [2024-07-24 19:06:21.452542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:43.858 [2024-07-24 19:06:21.452573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:23:44.116 [2024-07-24 19:06:21.464872] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f0bc0
00:23:44.116 [2024-07-24 19:06:21.465987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.116 [2024-07-24 19:06:21.466019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:23:44.116 [2024-07-24 19:06:21.476940] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e01f8
00:23:44.116 [2024-07-24 19:06:21.478024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.116 [2024-07-24 19:06:21.478054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:23:44.116 [2024-07-24 19:06:21.491255] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190ebfd0
00:23:44.116 [2024-07-24 19:06:21.492561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.116 [2024-07-24 19:06:21.492592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:23:44.116 [2024-07-24 19:06:21.504569] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e0ea0
00:23:44.116 [2024-07-24 19:06:21.506027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.116 [2024-07-24 19:06:21.506058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:23:44.117 [2024-07-24 19:06:21.516696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190fc560
00:23:44.117 [2024-07-24 19:06:21.518119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.117 [2024-07-24 19:06:21.518163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:23:44.117 [2024-07-24 19:06:21.528710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e49b0
00:23:44.117 [2024-07-24 19:06:21.529604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.117 [2024-07-24 19:06:21.529634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:23:44.117 [2024-07-24 19:06:21.541265] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f46d0
00:23:44.117 [2024-07-24 19:06:21.542191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.117 [2024-07-24 19:06:21.542218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:23:44.117 [2024-07-24 19:06:21.555668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f8a50
00:23:44.117 [2024-07-24 19:06:21.557301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:18235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.117 [2024-07-24 19:06:21.557328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:23:44.117 [2024-07-24 19:06:21.567643] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190eff18
00:23:44.117 [2024-07-24 19:06:21.568707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.117 [2024-07-24 19:06:21.568738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:23:44.117 [2024-07-24 19:06:21.580158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e01f8
00:23:44.117 [2024-07-24 19:06:21.581298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.117 [2024-07-24 19:06:21.581324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:23:44.117 [2024-07-24 19:06:21.592046] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e23b8
00:23:44.117 [2024-07-24 19:06:21.593107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.117 [2024-07-24 19:06:21.593152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:23:44.117 [2024-07-24 19:06:21.606230] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e1710
00:23:44.117 [2024-07-24 19:06:21.607497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.117 [2024-07-24 19:06:21.607527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:23:44.117 [2024-07-24 19:06:21.619538] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e01f8
00:23:44.117 [2024-07-24 19:06:21.620963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.117 [2024-07-24 19:06:21.620999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:23:44.117 [2024-07-24 19:06:21.632531] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190ec840
00:23:44.117 [2024-07-24 19:06:21.633943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.117 [2024-07-24 19:06:21.633973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:23:44.117 [2024-07-24 19:06:21.645706] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190ed920
00:23:44.117 [2024-07-24 19:06:21.647314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.117 [2024-07-24 19:06:21.647341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:23:44.117 [2024-07-24 19:06:21.657806] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e9e10
00:23:44.117 [2024-07-24 19:06:21.659416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.117 [2024-07-24 19:06:21.659459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:23:44.117 [2024-07-24 19:06:21.671277] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e3498
00:23:44.117 [2024-07-24 19:06:21.673064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.117 [2024-07-24 19:06:21.673095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:23:44.117 [2024-07-24 19:06:21.683295] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f0350
00:23:44.117 [2024-07-24 19:06:21.684542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.117 [2024-07-24 19:06:21.684575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:23:44.117 [2024-07-24 19:06:21.695827] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190eea00
00:23:44.117 [2024-07-24 19:06:21.697132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.117 [2024-07-24 19:06:21.697171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:23:44.117 [2024-07-24 19:06:21.707828] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e23b8
00:23:44.117 [2024-07-24 19:06:21.709085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.117 [2024-07-24 19:06:21.709123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:23:44.375 [2024-07-24 19:06:21.722261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e2c28
00:23:44.375 [2024-07-24 19:06:21.723695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.375 [2024-07-24 19:06:21.723728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:23:44.375 [2024-07-24 19:06:21.735592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190eea00
00:23:44.375 [2024-07-24 19:06:21.737239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.375 [2024-07-24 19:06:21.737268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:23:44.375 [2024-07-24 19:06:21.747858] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f6890
00:23:44.375 [2024-07-24 19:06:21.749508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.375 [2024-07-24 19:06:21.749550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:23:44.375 [2024-07-24 19:06:21.761007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190eaef0
00:23:44.375 [2024-07-24 19:06:21.762677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.375 [2024-07-24 19:06:21.762706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:23:44.375 [2024-07-24 19:06:21.773492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f1430
00:23:44.375 [2024-07-24 19:06:21.775275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.375 [2024-07-24 19:06:21.775303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:23:44.375 [2024-07-24 19:06:21.785935] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190efae0
00:23:44.375 [2024-07-24 19:06:21.787892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.375 [2024-07-24 19:06:21.787920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:23:44.375 [2024-07-24 19:06:21.794390] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f92c0
00:23:44.375 [2024-07-24 19:06:21.795182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.375 [2024-07-24 19:06:21.795209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:23:44.375 [2024-07-24 19:06:21.806504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e3d08
00:23:44.375 [2024-07-24 19:06:21.807389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.375 [2024-07-24 19:06:21.807417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:23:44.375 [2024-07-24 19:06:21.818409] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e99d8
00:23:44.376 [2024-07-24 19:06:21.819248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.376 [2024-07-24 19:06:21.819276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:23:44.376 [2024-07-24 19:06:21.830507] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f5be8
00:23:44.376 [2024-07-24 19:06:21.831478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.376 [2024-07-24 19:06:21.831505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:23:44.376 [2024-07-24 19:06:21.841705] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f4b08
00:23:44.376 [2024-07-24 19:06:21.842756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.376 [2024-07-24 19:06:21.842783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:23:44.376 [2024-07-24 19:06:21.855022] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f3e60
00:23:44.376 [2024-07-24 19:06:21.856171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.376 [2024-07-24 19:06:21.856199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:23:44.376 [2024-07-24 19:06:21.867196] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190de8a8
00:23:44.376 [2024-07-24 19:06:21.868434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.376 [2024-07-24 19:06:21.868461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:23:44.376 [2024-07-24 19:06:21.879152] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190df550
00:23:44.376 [2024-07-24 19:06:21.880452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.376 [2024-07-24 19:06:21.880479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:23:44.376 [2024-07-24 19:06:21.890969] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190fc998
00:23:44.376 [2024-07-24 19:06:21.892267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.376 [2024-07-24 19:06:21.892295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:23:44.376 [2024-07-24 19:06:21.902846] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f81e0
00:23:44.376 [2024-07-24 19:06:21.904170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.376 [2024-07-24 19:06:21.904198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:23:44.376 [2024-07-24 19:06:21.914959] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f5378
00:23:44.376 [2024-07-24 19:06:21.916371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.376 [2024-07-24 19:06:21.916398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:23:44.376 [2024-07-24 19:06:21.924561] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190f7100
00:23:44.376 [2024-07-24 19:06:21.925383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.376 [2024-07-24 19:06:21.925410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:23:44.376 [2024-07-24 19:06:21.936621] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190eee38
00:23:44.376 [2024-07-24 19:06:21.937307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.376 [2024-07-24 19:06:21.937341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:23:44.376 [2024-07-24 19:06:21.949762] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e4578
00:23:44.376 [2024-07-24 19:06:21.950084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.376 [2024-07-24 19:06:21.950121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:44.376 [2024-07-24 19:06:21.962594] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e4578
00:23:44.376 [2024-07-24 19:06:21.962940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.376 [2024-07-24 19:06:21.962968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:44.376 [2024-07-24 19:06:21.975513] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e4578
00:23:44.376 [2024-07-24 19:06:21.975791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.376 [2024-07-24 19:06:21.975820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:44.635 [2024-07-24 19:06:21.988442] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e4578
00:23:44.635 [2024-07-24 19:06:21.988763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.635 [2024-07-24 19:06:21.988791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:44.635 [2024-07-24 19:06:22.001502] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e4578
00:23:44.635 [2024-07-24 19:06:22.001873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.635 [2024-07-24 19:06:22.001900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:44.635 [2024-07-24 19:06:22.014663] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e4578
00:23:44.635 [2024-07-24 19:06:22.015040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.635 [2024-07-24 19:06:22.015082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:44.635 [2024-07-24 19:06:22.027812] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e4578
00:23:44.635 [2024-07-24 19:06:22.028177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.635 [2024-07-24 19:06:22.028205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:44.635 [2024-07-24 19:06:22.040926] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e4578
00:23:44.635 [2024-07-24 19:06:22.041250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.635 [2024-07-24 19:06:22.041277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:44.635 [2024-07-24 19:06:22.053928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e4578
00:23:44.635 [2024-07-24 19:06:22.054300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.635 [2024-07-24 19:06:22.054327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:44.635 [2024-07-24 19:06:22.067074] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e4578
00:23:44.635 [2024-07-24 19:06:22.067470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.635 [2024-07-24 19:06:22.067496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:44.635 [2024-07-24 19:06:22.080075] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e4578
00:23:44.635 [2024-07-24 19:06:22.080354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.635 [2024-07-24 19:06:22.080381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:44.635 [2024-07-24 19:06:22.093044] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e4578
00:23:44.635 [2024-07-24 19:06:22.093343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.635 [2024-07-24 19:06:22.093370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:44.635 [2024-07-24 19:06:22.106023] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e4578
00:23:44.635 [2024-07-24 19:06:22.106347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.635 [2024-07-24 19:06:22.106374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:44.635 [2024-07-24 19:06:22.119087] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e4578
00:23:44.635 [2024-07-24 19:06:22.119366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.635 [2024-07-24 19:06:22.119393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:44.635 [2024-07-24 19:06:22.132015] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e4578
00:23:44.635 [2024-07-24 19:06:22.132340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.635 [2024-07-24 19:06:22.132367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.635 [2024-07-24 19:06:22.145040] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e4578 00:23:44.635 [2024-07-24 19:06:22.145343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.635 [2024-07-24 19:06:22.145370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.635 [2024-07-24 19:06:22.157997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e4578 00:23:44.635 [2024-07-24 19:06:22.158276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.635 [2024-07-24 19:06:22.158303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.635 [2024-07-24 19:06:22.170969] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e4578 00:23:44.635 [2024-07-24 19:06:22.171240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.635 [2024-07-24 19:06:22.171266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.635 [2024-07-24 19:06:22.183914] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e4578 00:23:44.635 [2024-07-24 19:06:22.184214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.635 [2024-07-24 19:06:22.184241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.635 [2024-07-24 19:06:22.196929] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e4578 00:23:44.635 [2024-07-24 19:06:22.197229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.635 [2024-07-24 19:06:22.197256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.635 [2024-07-24 19:06:22.209809] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e4578 00:23:44.635 [2024-07-24 19:06:22.210164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.635 [2024-07-24 19:06:22.210191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.635 [2024-07-24 19:06:22.222760] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e4578 00:23:44.635 [2024-07-24 19:06:22.223022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.635 [2024-07-24 19:06:22.223048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:44.635 [2024-07-24 19:06:22.235629] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e4578
[2024-07-24 19:06:22.235983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-07-24 19:06:22.236012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:44.893 [2024-07-24 19:06:22.248728] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e4578
00:23:44.893 [2024-07-24 19:06:22.249045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.893 [2024-07-24 19:06:22.249073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:44.893 [2024-07-24 19:06:22.261796] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e4578
00:23:44.893 [2024-07-24 19:06:22.262120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.893 [2024-07-24 19:06:22.262147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:44.893 [2024-07-24 19:06:22.274780] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15f74f0) with pdu=0x2000190e4578
00:23:44.893 [2024-07-24 19:06:22.275127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:44.893 [2024-07-24 19:06:22.275160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:44.893
00:23:44.893 Latency(us)
00:23:44.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:44.893 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:23:44.893 nvme0n1 : 2.01 20013.55 78.18 0.00 0.00 6380.57 2633.58 15825.73
00:23:44.893 ===================================================================================================================
00:23:44.893 Total : 20013.55 78.18 0.00 0.00 6380.57 2633.58 15825.73
00:23:44.893 0
00:23:44.893 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:23:44.893 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:23:44.893 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:23:44.893 | .driver_specific
00:23:44.893 | .nvme_error
00:23:44.893 | .status_code
00:23:44.893 | .command_transient_transport_error'
00:23:44.893 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:23:45.152 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 157 > 0 ))
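The check above is the pass/fail gate for this 4 KiB randwrite pass: with data digest enabled and crc32c corruption injected, bdev_get_iostat must report a nonzero transient transport error count (157 here). A minimal sketch of the same query, assuming an SPDK checkout as the working directory and the bperf RPC socket shown in the trace; the jq path is taken verbatim from the filter above:

    #!/usr/bin/env bash
    # Pull the per-bdev NVMe error counters out of bdevperf's iostat JSON.
    # The counter only exists because bdev_nvme_set_options was called with
    # --nvme-error-stat earlier in the run.
    errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # Fail the test if none of the injected digest errors surfaced as
    # COMMAND TRANSIENT TRANSPORT ERROR completions.
    (( errcount > 0 )) || exit 1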
00:23:45.152 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3245403
00:23:45.152 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3245403 ']'
00:23:45.152 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3245403
00:23:45.152 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:23:45.152 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:45.152 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3245403
00:23:45.152 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:23:45.152 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:23:45.152 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3245403'
killing process with pid 3245403
00:23:45.152 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3245403
Received shutdown signal, test time was about 2.000000 seconds
00:23:45.152
00:23:45.152 Latency(us)
00:23:45.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:45.152 ===================================================================================================================
00:23:45.152 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:45.152 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3245403
00:23:45.410 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:23:45.410 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:23:45.410 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:23:45.410 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:23:45.410 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:23:45.410 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3245848
00:23:45.410 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:23:45.410 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3245848 /var/tmp/bperf.sock
00:23:45.410 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3245848 ']'
00:23:45.410 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:23:45.410 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:23:45.410 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
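run_bperf_err now repeats the exercise with 128 KiB random writes at queue depth 16. The trace that follows shows the same choreography as the 4 KiB pass: bdevperf is restarted idle, NVMe error statistics are enabled, the controller is attached with data digest turned on, and the accel layer is told to corrupt crc32c results so that digest errors appear during the timed run. A condensed sketch of that sequence, assuming an SPDK checkout as the working directory; rpc_cmd in the trace does not show which RPC socket receives the accel_error_inject_error calls, so targeting the bperf socket below is an assumption:

    #!/usr/bin/env bash
    sock=/var/tmp/bperf.sock
    # Start bdevperf idle (-z: wait for RPC-driven tests) with the workload
    # from the trace: core mask 0x2, 128 KiB randwrite, 2 s runtime, QD 16.
    build/examples/bdevperf -m 2 -r "$sock" -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # Keep per-command NVMe error counters and never stop retrying failed I/O.
    scripts/rpc.py -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Clear any previous crc32c fault injection, then attach the target with
    # data digest (--ddgst) enabled on the TCP connection.
    scripts/rpc.py -s "$sock" accel_error_inject_error -o crc32c -t disable
    scripts/rpc.py -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Inject corrupted crc32c results (-t corrupt -i 32, as captured in the trace).
    scripts/rpc.py -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 32
    # Kick off the timed run; each corrupted digest shows up in the log as a
    # data_crc32_calc_done error plus a TRANSIENT TRANSPORT ERROR completion.
    examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests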
00:23:45.410 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:45.410 19:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:45.410 [2024-07-24 19:06:22.903177] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:23:45.410 [2024-07-24 19:06:22.903272] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3245848 ] 00:23:45.411 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:45.411 Zero copy mechanism will not be used. 00:23:45.411 EAL: No free 2048 kB hugepages reported on node 1 00:23:45.411 [2024-07-24 19:06:22.966232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.668 [2024-07-24 19:06:23.081100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:45.668 19:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:45.668 19:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:23:45.668 19:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:45.668 19:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:45.925 19:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:45.925 19:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.925 19:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:45.925 19:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.925 19:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:45.925 19:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:46.490 nvme0n1 00:23:46.490 19:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:23:46.490 19:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.490 19:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:46.490 19:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.490 19:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:46.490 19:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:46.490 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:46.490 Zero copy mechanism will not be used. 00:23:46.490 Running I/O for 2 seconds... 00:23:46.490 [2024-07-24 19:06:24.022944] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:46.490 [2024-07-24 19:06:24.023355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.490 [2024-07-24 19:06:24.023419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.490 [2024-07-24 19:06:24.037442] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:46.490 [2024-07-24 19:06:24.037848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.490 [2024-07-24 19:06:24.037883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.490 [2024-07-24 19:06:24.050367] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:46.490 [2024-07-24 19:06:24.050716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.490 [2024-07-24 19:06:24.050747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.490 [2024-07-24 19:06:24.063716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:46.490 [2024-07-24 19:06:24.064115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.490 [2024-07-24 19:06:24.064164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.490 [2024-07-24 19:06:24.076852] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:46.490 [2024-07-24 19:06:24.077262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.490 [2024-07-24 19:06:24.077292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.490 [2024-07-24 19:06:24.089733] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:46.491 [2024-07-24 19:06:24.090120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.491 [2024-07-24 19:06:24.090174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.749 [2024-07-24 19:06:24.102891] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:46.749 [2024-07-24 
19:06:24.103287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.749 [2024-07-24 19:06:24.103331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.749 [2024-07-24 19:06:24.115774] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:46.749 [2024-07-24 19:06:24.115925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.749 [2024-07-24 19:06:24.115955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.749 [2024-07-24 19:06:24.129036] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:46.749 [2024-07-24 19:06:24.129409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.749 [2024-07-24 19:06:24.129453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.749 [2024-07-24 19:06:24.141965] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:46.749 [2024-07-24 19:06:24.142309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.749 [2024-07-24 19:06:24.142339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.749 [2024-07-24 19:06:24.154810] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:46.749 [2024-07-24 19:06:24.155194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.749 [2024-07-24 19:06:24.155224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.749 [2024-07-24 19:06:24.166338] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:46.749 [2024-07-24 19:06:24.166697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.749 [2024-07-24 19:06:24.166743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.749 [2024-07-24 19:06:24.179094] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:46.749 [2024-07-24 19:06:24.179469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.749 [2024-07-24 19:06:24.179515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.749 [2024-07-24 19:06:24.191040] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with 
pdu=0x2000190fef90 00:23:46.749 [2024-07-24 19:06:24.191428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.749 [2024-07-24 19:06:24.191457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.749 [2024-07-24 19:06:24.203923] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:46.749 [2024-07-24 19:06:24.204324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.749 [2024-07-24 19:06:24.204352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.749 [2024-07-24 19:06:24.215883] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:46.750 [2024-07-24 19:06:24.216232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.750 [2024-07-24 19:06:24.216262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.750 [2024-07-24 19:06:24.227825] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:46.750 [2024-07-24 19:06:24.228191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.750 [2024-07-24 19:06:24.228220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.750 [2024-07-24 19:06:24.239996] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:46.750 [2024-07-24 19:06:24.240353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.750 [2024-07-24 19:06:24.240383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.750 [2024-07-24 19:06:24.253006] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:46.750 [2024-07-24 19:06:24.253377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.750 [2024-07-24 19:06:24.253423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.750 [2024-07-24 19:06:24.265320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:46.750 [2024-07-24 19:06:24.265676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.750 [2024-07-24 19:06:24.265706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.750 [2024-07-24 19:06:24.278213] tcp.c:2113:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:46.750 [2024-07-24 19:06:24.278581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.750 [2024-07-24 19:06:24.278625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.750 [2024-07-24 19:06:24.290982] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:46.750 [2024-07-24 19:06:24.291369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.750 [2024-07-24 19:06:24.291413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.750 [2024-07-24 19:06:24.303450] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:46.750 [2024-07-24 19:06:24.303809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.750 [2024-07-24 19:06:24.303838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.750 [2024-07-24 19:06:24.317088] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:46.750 [2024-07-24 19:06:24.317440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.750 [2024-07-24 19:06:24.317482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.750 [2024-07-24 19:06:24.330432] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:46.750 [2024-07-24 19:06:24.330809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.750 [2024-07-24 19:06:24.330838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.750 [2024-07-24 19:06:24.344225] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:46.750 [2024-07-24 19:06:24.344593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.750 [2024-07-24 19:06:24.344627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.008 [2024-07-24 19:06:24.358436] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.008 [2024-07-24 19:06:24.358778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-24 19:06:24.358822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.008 [2024-07-24 19:06:24.370486] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.008 [2024-07-24 19:06:24.370825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-24 19:06:24.370868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.008 [2024-07-24 19:06:24.383224] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.008 [2024-07-24 19:06:24.383593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-24 19:06:24.383639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.008 [2024-07-24 19:06:24.395960] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.008 [2024-07-24 19:06:24.396332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.008 [2024-07-24 19:06:24.396377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.008 [2024-07-24 19:06:24.409434] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.008 [2024-07-24 19:06:24.409810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-24 19:06:24.409854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.009 [2024-07-24 19:06:24.421348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.009 [2024-07-24 19:06:24.421537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-24 19:06:24.421566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.009 [2024-07-24 19:06:24.433836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.009 [2024-07-24 19:06:24.434108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-24 19:06:24.434137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.009 [2024-07-24 19:06:24.445525] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.009 [2024-07-24 19:06:24.445914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-24 19:06:24.445945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:23:47.009 [2024-07-24 19:06:24.457118] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.009 [2024-07-24 19:06:24.457600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-24 19:06:24.457629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.009 [2024-07-24 19:06:24.468299] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.009 [2024-07-24 19:06:24.468671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-24 19:06:24.468715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.009 [2024-07-24 19:06:24.478834] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.009 [2024-07-24 19:06:24.479293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-24 19:06:24.479338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.009 [2024-07-24 19:06:24.490660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.009 [2024-07-24 19:06:24.491067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-24 19:06:24.491097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.009 [2024-07-24 19:06:24.501559] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.009 [2024-07-24 19:06:24.501919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-24 19:06:24.501948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.009 [2024-07-24 19:06:24.512801] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.009 [2024-07-24 19:06:24.513210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-24 19:06:24.513240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.009 [2024-07-24 19:06:24.524324] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.009 [2024-07-24 19:06:24.524645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-24 19:06:24.524675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.009 [2024-07-24 19:06:24.535250] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.009 [2024-07-24 19:06:24.535564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-24 19:06:24.535600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.009 [2024-07-24 19:06:24.546371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.009 [2024-07-24 19:06:24.546806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-24 19:06:24.546838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.009 [2024-07-24 19:06:24.557767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.009 [2024-07-24 19:06:24.558177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-24 19:06:24.558208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.009 [2024-07-24 19:06:24.569551] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.009 [2024-07-24 19:06:24.569858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-24 19:06:24.569889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.009 [2024-07-24 19:06:24.580721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.009 [2024-07-24 19:06:24.581087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-24 19:06:24.581125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.009 [2024-07-24 19:06:24.591388] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.009 [2024-07-24 19:06:24.591751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-24 19:06:24.591795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.009 [2024-07-24 19:06:24.602275] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.009 [2024-07-24 19:06:24.602671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.009 [2024-07-24 19:06:24.602702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.269 [2024-07-24 19:06:24.613501] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.269 [2024-07-24 19:06:24.613916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.269 [2024-07-24 19:06:24.613959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.269 [2024-07-24 19:06:24.624469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.269 [2024-07-24 19:06:24.624850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.269 [2024-07-24 19:06:24.624880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.269 [2024-07-24 19:06:24.635290] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.269 [2024-07-24 19:06:24.635687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.269 [2024-07-24 19:06:24.635734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.269 [2024-07-24 19:06:24.646662] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.269 [2024-07-24 19:06:24.647013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.269 [2024-07-24 19:06:24.647043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.269 [2024-07-24 19:06:24.657043] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.269 [2024-07-24 19:06:24.657532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.269 [2024-07-24 19:06:24.657564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.269 [2024-07-24 19:06:24.669009] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.269 [2024-07-24 19:06:24.669492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.269 [2024-07-24 19:06:24.669534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.269 [2024-07-24 19:06:24.680437] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.269 [2024-07-24 19:06:24.680838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.269 [2024-07-24 19:06:24.680871] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.269 [2024-07-24 19:06:24.691379] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.269 [2024-07-24 19:06:24.691819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.269 [2024-07-24 19:06:24.691848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.269 [2024-07-24 19:06:24.702599] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.269 [2024-07-24 19:06:24.702931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.269 [2024-07-24 19:06:24.702961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.269 [2024-07-24 19:06:24.714221] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.269 [2024-07-24 19:06:24.714696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.269 [2024-07-24 19:06:24.714739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.269 [2024-07-24 19:06:24.726061] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.269 [2024-07-24 19:06:24.726570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.269 [2024-07-24 19:06:24.726627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.269 [2024-07-24 19:06:24.738184] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.269 [2024-07-24 19:06:24.738541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.269 [2024-07-24 19:06:24.738585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.269 [2024-07-24 19:06:24.748892] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.269 [2024-07-24 19:06:24.749309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.269 [2024-07-24 19:06:24.749339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.269 [2024-07-24 19:06:24.760210] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.269 [2024-07-24 19:06:24.760621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.269 
[2024-07-24 19:06:24.760669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.269 [2024-07-24 19:06:24.771597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.269 [2024-07-24 19:06:24.772080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.269 [2024-07-24 19:06:24.772118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.269 [2024-07-24 19:06:24.782705] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.269 [2024-07-24 19:06:24.783171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.269 [2024-07-24 19:06:24.783220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.269 [2024-07-24 19:06:24.794131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.269 [2024-07-24 19:06:24.794543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.270 [2024-07-24 19:06:24.794573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.270 [2024-07-24 19:06:24.805728] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.270 [2024-07-24 19:06:24.806200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.270 [2024-07-24 19:06:24.806234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.270 [2024-07-24 19:06:24.817919] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.270 [2024-07-24 19:06:24.818366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.270 [2024-07-24 19:06:24.818396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.270 [2024-07-24 19:06:24.829783] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.270 [2024-07-24 19:06:24.830229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.270 [2024-07-24 19:06:24.830259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.270 [2024-07-24 19:06:24.842731] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90 00:23:47.270 [2024-07-24 19:06:24.843084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.270 [2024-07-24 19:06:24.843131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[output condensed: from 19:06:24.853628 through the final WRITE below, the same three-record cycle repeats for each queued WRITE (sqid:1 cid:15 nsid:1, len:32, varying lba) — tcp.c:2113:data_crc32_calc_done reports a data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90, the offending WRITE is printed, and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22); 169 such transient-error completions are counted in total]
00:23:48.627 [2024-07-24 19:06:26.010158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x142caf0) with pdu=0x2000190fef90
00:23:48.627 [2024-07-24 19:06:26.010545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:48.627 [2024-07-24 19:06:26.010575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:48.627
00:23:48.627 Latency(us)
00:23:48.627 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:48.627 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:23:48.627 nvme0n1 : 2.01 2620.07 327.51 0.00 0.00 6092.94 4490.43 14272.28
00:23:48.627 ===================================================================================================================
00:23:48.627 Total : 2620.07 327.51 0.00 0.00 6092.94 4490.43 14272.28
00:23:48.627 0
00:23:48.627 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
19:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
19:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
19:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:23:48.627 | .driver_specific
00:23:48.627 | .nvme_error
00:23:48.627 | .status_code
00:23:48.627 | .command_transient_transport_error'
00:23:48.886 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 169 > 0 ))
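The pass/fail check just traced reduces to one RPC round-trip and one jq filter: bdev_get_iostat exposes per-bdev NVMe error counters under driver_specific.nvme_error, and the test requires the transient-transport-error counter (169 in this run) to be non-zero. A minimal standalone sketch of that check, assuming the same bperf RPC socket and repo layout as this run ("errcount" is an illustrative name, not from the script):

errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
# Succeed only if the injected data digest errors surfaced as transient transport errors:
(( errcount > 0 ))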
00:23:48.886 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3245848
00:23:48.886 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3245848 ']'
00:23:48.886 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3245848
00:23:48.886 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:23:48.886 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:48.886 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3245848
00:23:48.886 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:23:48.886 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:23:48.886 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3245848'
00:23:48.886 killing process with pid 3245848
00:23:48.886 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3245848
00:23:48.886 Received shutdown signal, test time was about 2.000000 seconds
00:23:48.886
00:23:48.886 Latency(us)
00:23:48.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:48.886 ===================================================================================================================
00:23:48.886 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:48.886 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3245848
00:23:49.144 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3244444
00:23:49.144 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3244444 ']'
00:23:49.144 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3244444
00:23:49.144 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:23:49.144 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:49.144 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3244444
00:23:49.144 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:23:49.144 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:23:49.144 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3244444'
00:23:49.144 killing process with pid 3244444
00:23:49.144 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3244444
00:23:49.144 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3244444
00:23:49.403
00:23:49.403 real 0m15.541s
00:23:49.403 user 0m30.295s
00:23:49.403 sys 0m4.346s
00:23:49.403 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable
00:23:49.403 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:49.403 ************************************
00:23:49.403 END TEST nvmf_digest_error
00:23:49.403 ************************************
00:23:49.403 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
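The two killprocess calls above (bperf pid 3245848, then pid 3244444) follow the guard-kill-wait pattern visible in the xtrace at autotest_common.sh lines 950-974. A hedged re-sketch of that pattern — not the verbatim in-tree helper:

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                          # line 950: a pid argument is required
    if ! kill -0 "$pid" 2> /dev/null; then             # line 954: probe existence without signaling
        echo "Process with pid $pid is not found"      # line 977: the path taken later for 3244444
        return 0
    fi
    local process_name
    [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")  # line 956: reactor_0/reactor_1 here
    [ "$process_name" = sudo ] && return 1             # line 960: never SIGTERM the sudo wrapper itself
    echo "killing process with pid $pid"               # line 968
    kill "$pid"                                        # line 969: plain SIGTERM, so bperf can print its shutdown summary
    wait "$pid"                                        # line 974: reap the child, keeping the log ordering deterministic
}

Note that wait only reaps children of the invoking shell, which holds for the test apps launched by this script.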
00:23:49.403 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:23:49.403 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:49.403 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:23:49.403 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:49.403 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:23:49.403 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:49.403 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:23:49.403 rmmod nvme_tcp
00:23:49.403 rmmod nvme_fabrics
00:23:49.403 rmmod nvme_keyring
00:23:49.403 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:49.403 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:23:49.403 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:23:49.403 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3244444 ']'
00:23:49.403 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3244444
00:23:49.403 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 3244444 ']'
00:23:49.403 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 3244444
00:23:49.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3244444) - No such process
00:23:49.403 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 3244444 is not found'
00:23:49.403 Process with pid 3244444 is not found
00:23:49.403 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:23:49.403 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:23:49.403 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:23:49.403 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:23:49.403 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
00:23:49.403 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:49.403 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:49.403 19:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:51.934 19:06:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:23:51.934
00:23:51.934 real 0m35.418s
00:23:51.934 user 1m1.096s
00:23:51.934 sys 0m10.203s
00:23:51.934 19:06:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable
00:23:51.934 19:06:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:23:51.934 ************************************
00:23:51.934 END TEST nvmf_digest
00:23:51.934 ************************************
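Condensed to its effective commands, the nvmftestfini/nvmf_tcp_fini teardown just traced looks roughly like the sketch below. This is a sketch, not the in-tree helper: the real code in nvmf/common.sh also handles the iso and network-namespace cases seen in the trace, and "$nvmfpid" is an illustrative stand-in for the stored target pid (3244444 in this run, already gone by teardown time).

sync                                  # flush dirty pages before unloading kernel modules
set +e                                # module removal may need retries while references drain
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break  # -r also removes dependents: nvme_tcp, nvme_fabrics, nvme_keyring
done
modprobe -v -r nvme-fabrics
set -e
killprocess "$nvmfpid"                # here: "No such process" / "Process with pid 3244444 is not found"
ip -4 addr flush cvl_0_1              # drop the test addresses from the cvl_0_1 interface

With the digest suite torn down, nvmf_host.sh dispatches the next suite, nvmf_bdevperf, below.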
00:23:51.934 19:06:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:23:51.934 19:06:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:23:51.934 19:06:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:23:51.934 19:06:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
19:06:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
19:06:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
19:06:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:23:51.934 ************************************
00:23:51.934 START TEST nvmf_bdevperf
00:23:51.934 ************************************
00:23:51.934 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:23:51.934 * Looking for test storage...
00:23:51.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:23:51.934 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:23:51.934 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:23:51.934 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:23:51.934 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:23:51.934 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:23:51.934 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:23:51.934 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:23:51.934 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:23:51.934 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:23:51.934 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:23:51.934 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:23:51.934 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:23:51.934 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:23:51.934 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:23:51.934 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:23:51.934 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:23:51.934 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:23:51.934 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:23:51.934 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin (deduplicated; each sourcing of paths/export.sh prepends the same /opt/golangci, /opt/protoc and /opt/go directories again, so the raw value repeats that triple six times) 00:23:51.935 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=(same value with /opt/go/1.21.1/bin prepended once more) 00:23:51.935 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=(same value with /opt/protoc/21.7/bin prepended once more) 00:23:51.935 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:23:51.935 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo (the PATH value above, duplicates elided) 00:23:51.935 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:23:51.935 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:51.935 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:51.935 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:51.935 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:51.935 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:51.935 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:51.935 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:51.935 19:06:29
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:51.935 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:51.935 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:51.935 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:23:51.935 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:51.935 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:51.935 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:51.935 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:51.935 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:51.935 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.935 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:51.935 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.935 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:51.935 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:51.935 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:23:51.935 19:06:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:53.836 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:53.836 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:23:53.836 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:53.836 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:53.836 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:53.836 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:53.836 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:53.836 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:23:53.836 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:53.836 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:23:53.836 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:23:53.836 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:23:53.836 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:23:53.836 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:23:53.836 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:23:53.836 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:53.836 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:53.836 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:53.836 19:06:31 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:53.836 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:53.836 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:53.836 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:53.836 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:53.836 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:53.836 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:53.836 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:53.836 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:53.836 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:53.836 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:53.836 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:53.836 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:53.836 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:53.836 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:53.836 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:53.836 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:53.837 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:53.837 19:06:31 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:53.837 Found net devices under 0000:09:00.0: cvl_0_0 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:53.837 Found net devices under 0000:09:00.1: cvl_0_1 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:53.837 19:06:31 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:53.837 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:53.837 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:23:53.837 00:23:53.837 --- 10.0.0.2 ping statistics --- 00:23:53.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.837 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:53.837 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:53.837 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:23:53.837 00:23:53.837 --- 10.0.0.1 ping statistics --- 00:23:53.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.837 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3248301 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3248301 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3248301 ']' 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:53.837 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:53.837 [2024-07-24 19:06:31.314220] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
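For reference, the namespace plumbing that nvmf_tcp_init traces just above is easy to reproduce by hand. A minimal sketch (run as root from the SPDK checkout), using exactly the interface names, addresses and flags shown in this log:

# one e810 port (cvl_0_0) moves into a private namespace and becomes the
# target side; its sibling port (cvl_0_1) stays in the root namespace as
# the initiator side
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # initiator -> target, the reachability check traced above
# start the target inside the namespace, as nvmfappstart does
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &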
00:23:53.837 [2024-07-24 19:06:31.314301] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.837 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.837 [2024-07-24 19:06:31.380537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:54.095 [2024-07-24 19:06:31.486940] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:54.095 [2024-07-24 19:06:31.486991] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:54.095 [2024-07-24 19:06:31.487018] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:54.095 [2024-07-24 19:06:31.487029] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:54.095 [2024-07-24 19:06:31.487039] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:54.095 [2024-07-24 19:06:31.487112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:54.095 [2024-07-24 19:06:31.487247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:54.095 [2024-07-24 19:06:31.487252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.095 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:54.095 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:23:54.095 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:54.096 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:54.096 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:54.096 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:54.096 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:54.096 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.096 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:54.096 [2024-07-24 19:06:31.619253] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:54.096 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.096 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:54.096 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.096 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:54.096 Malloc0 00:23:54.096 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.096 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:54.096 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.096 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:54.096 19:06:31 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.096 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:54.096 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.096 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:54.096 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.096 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:54.096 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.096 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:54.096 [2024-07-24 19:06:31.685705] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:54.096 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.096 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:23:54.096 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:23:54.096 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:23:54.096 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:23:54.096 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:54.096 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:54.096 { 00:23:54.096 "params": { 00:23:54.096 "name": "Nvme$subsystem", 00:23:54.096 "trtype": "$TEST_TRANSPORT", 00:23:54.096 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:54.096 "adrfam": "ipv4", 00:23:54.096 "trsvcid": "$NVMF_PORT", 00:23:54.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:54.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:54.096 "hdgst": ${hdgst:-false}, 00:23:54.096 "ddgst": ${ddgst:-false} 00:23:54.096 }, 00:23:54.096 "method": "bdev_nvme_attach_controller" 00:23:54.096 } 00:23:54.096 EOF 00:23:54.096 )") 00:23:54.096 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:23:54.096 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:23:54.096 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:23:54.096 19:06:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:54.096 "params": { 00:23:54.096 "name": "Nvme1", 00:23:54.096 "trtype": "tcp", 00:23:54.096 "traddr": "10.0.0.2", 00:23:54.096 "adrfam": "ipv4", 00:23:54.096 "trsvcid": "4420", 00:23:54.096 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:54.096 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:54.096 "hdgst": false, 00:23:54.096 "ddgst": false 00:23:54.096 }, 00:23:54.096 "method": "bdev_nvme_attach_controller" 00:23:54.096 }' 00:23:54.354 [2024-07-24 19:06:31.730881] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
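The target that bdevperf connects to here was provisioned entirely through rpc_cmd, the harness wrapper around SPDK's scripts/rpc.py. A standalone sketch of the same five calls, assuming the default /var/tmp/spdk.sock RPC socket:

# export a 64 MiB malloc bdev over NVMe/TCP, mirroring the rpc_cmd calls above
RPC=./scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192   # TCP transport with the harness's tuning flags
$RPC bdev_malloc_create 64 512 -b Malloc0      # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420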
00:23:54.354 [2024-07-24 19:06:31.730952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3248323 ] 00:23:54.354 EAL: No free 2048 kB hugepages reported on node 1 00:23:54.354 [2024-07-24 19:06:31.789860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.354 [2024-07-24 19:06:31.900194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.612 Running I/O for 1 seconds... 00:23:55.546 00:23:55.546 Latency(us) 00:23:55.546 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.546 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:55.546 Verification LBA range: start 0x0 length 0x4000 00:23:55.546 Nvme1n1 : 1.01 8535.89 33.34 0.00 0.00 14931.64 1189.36 15146.10 00:23:55.546 =================================================================================================================== 00:23:55.546 Total : 8535.89 33.34 0.00 0.00 14931.64 1189.36 15146.10 00:23:55.803 19:06:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3248588 00:23:55.803 19:06:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:23:55.803 19:06:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:23:55.803 19:06:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:23:55.803 19:06:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:23:55.803 19:06:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:23:55.803 19:06:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:55.803 19:06:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:55.803 { 00:23:55.803 "params": { 00:23:55.803 "name": "Nvme$subsystem", 00:23:55.803 "trtype": "$TEST_TRANSPORT", 00:23:55.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:55.803 "adrfam": "ipv4", 00:23:55.803 "trsvcid": "$NVMF_PORT", 00:23:55.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:55.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:55.803 "hdgst": ${hdgst:-false}, 00:23:55.803 "ddgst": ${ddgst:-false} 00:23:55.803 }, 00:23:55.803 "method": "bdev_nvme_attach_controller" 00:23:55.803 } 00:23:55.803 EOF 00:23:55.803 )") 00:23:55.803 19:06:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:23:55.804 19:06:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:23:55.804 19:06:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:23:55.804 19:06:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:55.804 "params": { 00:23:55.804 "name": "Nvme1", 00:23:55.804 "trtype": "tcp", 00:23:55.804 "traddr": "10.0.0.2", 00:23:55.804 "adrfam": "ipv4", 00:23:55.804 "trsvcid": "4420", 00:23:55.804 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.804 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:55.804 "hdgst": false, 00:23:55.804 "ddgst": false 00:23:55.804 }, 00:23:55.804 "method": "bdev_nvme_attach_controller" 00:23:55.804 }' 00:23:55.804 [2024-07-24 19:06:33.384878] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
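Both bdevperf runs take their controller definition as JSON on an inherited descriptor (--json /dev/fd/62 and /dev/fd/63); gen_nvmf_target_json emits the bdev_nvme_attach_controller entry printed in the trace. A hand-run equivalent, assuming the standard SPDK "subsystems"/"bdev" JSON-config wrapper around that same entry (/tmp/nvme1.json is a name chosen for this sketch):

cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# first run above: 128 outstanding 4096-byte verify I/Os for 1 second
./build/examples/bdevperf --json /tmp/nvme1.json -q 128 -o 4096 -w verify -t 1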
00:23:55.804 [2024-07-24 19:06:33.384956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3248588 ] 00:23:56.060 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.060 [2024-07-24 19:06:33.443722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.060 [2024-07-24 19:06:33.551013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.317 Running I/O for 15 seconds... 00:23:58.849 19:06:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3248301 00:23:58.849 19:06:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:23:58.849 [2024-07-24 19:06:36.358039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:46728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.849 [2024-07-24 19:06:36.358091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:58.849 [... the same nvme_qpair.c print_command / print_completion pairing repeats for every command still queued on sqid:1 (READs and WRITEs, lba 46728 through 47736, len:8), each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; the remaining near-identical notices, which fill the rest of this excerpt, are elided ...]
19:06:36.361412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:46792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.851 [2024-07-24 19:06:36.361432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 19:06:36.361450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:46800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 19:06:36.361465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 19:06:36.361486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:46808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 19:06:36.361502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 19:06:36.361518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:46816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 19:06:36.361534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 19:06:36.361550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:46824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 19:06:36.361566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 19:06:36.361582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:46832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 19:06:36.361597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 19:06:36.361614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:46840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 19:06:36.361629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 19:06:36.361645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:46848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 19:06:36.361660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 19:06:36.361677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:46856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 19:06:36.361692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 19:06:36.361708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:46864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 19:06:36.361723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 19:06:36.361740] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:46872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 19:06:36.361754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 19:06:36.361771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:46880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 19:06:36.361786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 19:06:36.361803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:46888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 19:06:36.361819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 19:06:36.361835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:46896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 19:06:36.361850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 19:06:36.361867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:46904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 19:06:36.361882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 19:06:36.361902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:46912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 19:06:36.361918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 19:06:36.361935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.852 [2024-07-24 19:06:36.361951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 19:06:36.361967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:46920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 19:06:36.361983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 19:06:36.361999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:46928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 19:06:36.362014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 19:06:36.362030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:46936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 19:06:36.362045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 19:06:36.362062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:42 nsid:1 lba:46944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 19:06:36.362077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 19:06:36.362094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:46952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 19:06:36.362117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 19:06:36.362151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:46960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 19:06:36.362166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 19:06:36.362182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:46968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 19:06:36.362195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 19:06:36.362210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:46976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 19:06:36.362223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 19:06:36.362239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:46984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 19:06:36.362253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 19:06:36.362267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:46992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 19:06:36.362280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 19:06:36.362295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:47000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 19:06:36.362312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 19:06:36.362327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c7830 is same with the state(5) to be set 00:23:58.852 [2024-07-24 19:06:36.362343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:58.852 [2024-07-24 19:06:36.362354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:58.852 [2024-07-24 19:06:36.362365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47008 len:8 PRP1 0x0 PRP2 0x0 00:23:58.852 [2024-07-24 19:06:36.362377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 19:06:36.362463] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: 
00:23:58.852 [2024-07-24 19:06:36.366257] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:58.852 [2024-07-24 19:06:36.366330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:58.852 [2024-07-24 19:06:36.367046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:58.852 [2024-07-24 19:06:36.367078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:58.852 [2024-07-24 19:06:36.367096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:58.852 [2024-07-24 19:06:36.367350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:58.852 [2024-07-24 19:06:36.367593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:58.852 [2024-07-24 19:06:36.367617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:58.852 [2024-07-24 19:06:36.367634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:58.852 [2024-07-24 19:06:36.371213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:58.852 [2024-07-24 19:06:36.380475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:58.852 [2024-07-24 19:06:36.380895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:58.852 [2024-07-24 19:06:36.380927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:58.852 [2024-07-24 19:06:36.380944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:58.852 [2024-07-24 19:06:36.381195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:58.852 [2024-07-24 19:06:36.381438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:58.852 [2024-07-24 19:06:36.381462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:58.852 [2024-07-24 19:06:36.381476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:58.852 [2024-07-24 19:06:36.385040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
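errno 111 here is ECONNREFUSED on Linux: nothing is accepting connections at 10.0.0.2:4420 (4420 is the IANA-assigned NVMe/TCP port) while the target side is down, so every reconnect attempt dies at connect(). A minimal standalone sketch that reproduces the same errno, assuming a Linux host with no listener on the chosen port (the 127.0.0.1 address below is a placeholder, not taken from this run):

/* sketch, not SPDK code: connect() to a TCP port with no listener fails
 * with ECONNREFUSED (111 on Linux), matching posix_sock_create above */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr); /* placeholder address */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    close(fd);
    return 0;
}

Run against a closed port this prints errno = 111 (Connection refused), the same code nvme_tcp_qpair_connect_sock is propagating in each cycle above.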
00:23:58.852 [2024-07-24 19:06:36.394321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.852 [2024-07-24 19:06:36.394758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.852 [2024-07-24 19:06:36.394790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:23:58.853 [2024-07-24 19:06:36.394807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:23:58.853 [2024-07-24 19:06:36.395051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:23:58.853 [2024-07-24 19:06:36.395304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.853 [2024-07-24 19:06:36.395328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.853 [2024-07-24 19:06:36.395343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.853 [2024-07-24 19:06:36.398911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.853 [2024-07-24 19:06:36.408189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.853 [2024-07-24 19:06:36.408687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.853 [2024-07-24 19:06:36.408729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:23:58.853 [2024-07-24 19:06:36.408746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:23:58.853 [2024-07-24 19:06:36.409004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:23:58.853 [2024-07-24 19:06:36.409258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.853 [2024-07-24 19:06:36.409282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.853 [2024-07-24 19:06:36.409297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.853 [2024-07-24 19:06:36.412867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.853 [2024-07-24 19:06:36.422168] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.853 [2024-07-24 19:06:36.422618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.853 [2024-07-24 19:06:36.422659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:23:58.853 [2024-07-24 19:06:36.422675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:23:58.853 [2024-07-24 19:06:36.422916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:23:58.853 [2024-07-24 19:06:36.423171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.853 [2024-07-24 19:06:36.423196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.853 [2024-07-24 19:06:36.423210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.853 [2024-07-24 19:06:36.426782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.853 [2024-07-24 19:06:36.436072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.853 [2024-07-24 19:06:36.436524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.853 [2024-07-24 19:06:36.436555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:23:58.853 [2024-07-24 19:06:36.436573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:23:58.853 [2024-07-24 19:06:36.436812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:23:58.853 [2024-07-24 19:06:36.437054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.853 [2024-07-24 19:06:36.437077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.853 [2024-07-24 19:06:36.437098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.853 [2024-07-24 19:06:36.440692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.115 [2024-07-24 19:06:36.450016] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.115 [2024-07-24 19:06:36.450439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.115 [2024-07-24 19:06:36.450472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:23:59.115 [2024-07-24 19:06:36.450489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:23:59.115 [2024-07-24 19:06:36.450727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:23:59.115 [2024-07-24 19:06:36.450969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.115 [2024-07-24 19:06:36.450992] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.115 [2024-07-24 19:06:36.451007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.115 [2024-07-24 19:06:36.454592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.115 [2024-07-24 19:06:36.463875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.115 [2024-07-24 19:06:36.464326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.115 [2024-07-24 19:06:36.464357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:23:59.115 [2024-07-24 19:06:36.464375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:23:59.115 [2024-07-24 19:06:36.464612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:23:59.115 [2024-07-24 19:06:36.464855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.115 [2024-07-24 19:06:36.464878] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.115 [2024-07-24 19:06:36.464893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.115 [2024-07-24 19:06:36.468477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.115 [2024-07-24 19:06:36.477759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.115 [2024-07-24 19:06:36.478212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.115 [2024-07-24 19:06:36.478244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:23:59.115 [2024-07-24 19:06:36.478261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:23:59.115 [2024-07-24 19:06:36.478499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:23:59.115 [2024-07-24 19:06:36.478741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.115 [2024-07-24 19:06:36.478765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.115 [2024-07-24 19:06:36.478780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.115 [2024-07-24 19:06:36.482363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.115 [2024-07-24 19:06:36.491647] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.115 [2024-07-24 19:06:36.492113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.115 [2024-07-24 19:06:36.492145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:23:59.115 [2024-07-24 19:06:36.492162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:23:59.115 [2024-07-24 19:06:36.492400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:23:59.115 [2024-07-24 19:06:36.492642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.115 [2024-07-24 19:06:36.492665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.115 [2024-07-24 19:06:36.492679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.115 [2024-07-24 19:06:36.496271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.115 [2024-07-24 19:06:36.505546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.115 [2024-07-24 19:06:36.505981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.115 [2024-07-24 19:06:36.506011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:23:59.115 [2024-07-24 19:06:36.506029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:23:59.115 [2024-07-24 19:06:36.506276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:23:59.115 [2024-07-24 19:06:36.506519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.115 [2024-07-24 19:06:36.506542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.115 [2024-07-24 19:06:36.506557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.115 [2024-07-24 19:06:36.510140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.115 [2024-07-24 19:06:36.519407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.115 [2024-07-24 19:06:36.519854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.115 [2024-07-24 19:06:36.519884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:23:59.115 [2024-07-24 19:06:36.519901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:23:59.115 [2024-07-24 19:06:36.520159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:23:59.115 [2024-07-24 19:06:36.520403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.115 [2024-07-24 19:06:36.520433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.115 [2024-07-24 19:06:36.520448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.115 [2024-07-24 19:06:36.524011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.115 [2024-07-24 19:06:36.533291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.115 [2024-07-24 19:06:36.533722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.115 [2024-07-24 19:06:36.533753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:23:59.115 [2024-07-24 19:06:36.533770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:23:59.115 [2024-07-24 19:06:36.534013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:23:59.115 [2024-07-24 19:06:36.534264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.115 [2024-07-24 19:06:36.534288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.115 [2024-07-24 19:06:36.534303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.115 [2024-07-24 19:06:36.537868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.115 [2024-07-24 19:06:36.547166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.115 [2024-07-24 19:06:36.547576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.115 [2024-07-24 19:06:36.547607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:23:59.115 [2024-07-24 19:06:36.547624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:23:59.115 [2024-07-24 19:06:36.547862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:23:59.115 [2024-07-24 19:06:36.548113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.115 [2024-07-24 19:06:36.548137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.115 [2024-07-24 19:06:36.548152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.115 [2024-07-24 19:06:36.551720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.116 [2024-07-24 19:06:36.561009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.116 [2024-07-24 19:06:36.561466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.116 [2024-07-24 19:06:36.561497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:23:59.116 [2024-07-24 19:06:36.561515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:23:59.116 [2024-07-24 19:06:36.561752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:23:59.116 [2024-07-24 19:06:36.561996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.116 [2024-07-24 19:06:36.562019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.116 [2024-07-24 19:06:36.562034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.116 [2024-07-24 19:06:36.565614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.116 [2024-07-24 19:06:36.574892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.116 [2024-07-24 19:06:36.575344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.116 [2024-07-24 19:06:36.575376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:23:59.116 [2024-07-24 19:06:36.575393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:23:59.116 [2024-07-24 19:06:36.575631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:23:59.116 [2024-07-24 19:06:36.575874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.116 [2024-07-24 19:06:36.575897] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.116 [2024-07-24 19:06:36.575917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.116 [2024-07-24 19:06:36.579494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.116 [2024-07-24 19:06:36.588752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.116 [2024-07-24 19:06:36.589137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.116 [2024-07-24 19:06:36.589169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:23:59.116 [2024-07-24 19:06:36.589186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:23:59.116 [2024-07-24 19:06:36.589424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:23:59.116 [2024-07-24 19:06:36.589666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.116 [2024-07-24 19:06:36.589690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.116 [2024-07-24 19:06:36.589705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.116 [2024-07-24 19:06:36.593281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.116 [2024-07-24 19:06:36.602750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.116 [2024-07-24 19:06:36.603192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.116 [2024-07-24 19:06:36.603224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:23:59.116 [2024-07-24 19:06:36.603242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:23:59.116 [2024-07-24 19:06:36.603481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:23:59.116 [2024-07-24 19:06:36.603723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.116 [2024-07-24 19:06:36.603746] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.116 [2024-07-24 19:06:36.603761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.116 [2024-07-24 19:06:36.607337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.116 [2024-07-24 19:06:36.616603] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.116 [2024-07-24 19:06:36.617031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.116 [2024-07-24 19:06:36.617063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:23:59.116 [2024-07-24 19:06:36.617080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:23:59.116 [2024-07-24 19:06:36.617326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:23:59.116 [2024-07-24 19:06:36.617569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.116 [2024-07-24 19:06:36.617592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.116 [2024-07-24 19:06:36.617607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.116 [2024-07-24 19:06:36.621180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.116 [2024-07-24 19:06:36.630477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.116 [2024-07-24 19:06:36.630991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.116 [2024-07-24 19:06:36.631063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:23:59.116 [2024-07-24 19:06:36.631082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:23:59.116 [2024-07-24 19:06:36.631328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:23:59.116 [2024-07-24 19:06:36.631571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.116 [2024-07-24 19:06:36.631594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.116 [2024-07-24 19:06:36.631608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.116 [2024-07-24 19:06:36.635184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.116 [2024-07-24 19:06:36.644466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.116 [2024-07-24 19:06:36.644871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.116 [2024-07-24 19:06:36.644902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:23:59.116 [2024-07-24 19:06:36.644920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:23:59.116 [2024-07-24 19:06:36.645179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:23:59.116 [2024-07-24 19:06:36.645422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.116 [2024-07-24 19:06:36.645446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.116 [2024-07-24 19:06:36.645461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.116 [2024-07-24 19:06:36.649026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.116 [2024-07-24 19:06:36.658502] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.116 [2024-07-24 19:06:36.658961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.116 [2024-07-24 19:06:36.658991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:23:59.116 [2024-07-24 19:06:36.659009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:23:59.116 [2024-07-24 19:06:36.659257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:23:59.116 [2024-07-24 19:06:36.659500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.116 [2024-07-24 19:06:36.659524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.116 [2024-07-24 19:06:36.659539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.116 [2024-07-24 19:06:36.663111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.116 [2024-07-24 19:06:36.672375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.116 [2024-07-24 19:06:36.672804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.116 [2024-07-24 19:06:36.672836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:23:59.116 [2024-07-24 19:06:36.672853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:23:59.116 [2024-07-24 19:06:36.673091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:23:59.116 [2024-07-24 19:06:36.673350] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.116 [2024-07-24 19:06:36.673374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.116 [2024-07-24 19:06:36.673389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.116 [2024-07-24 19:06:36.676952] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.116 [2024-07-24 19:06:36.686223] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.116 [2024-07-24 19:06:36.686656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.116 [2024-07-24 19:06:36.686687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:23:59.116 [2024-07-24 19:06:36.686704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:23:59.116 [2024-07-24 19:06:36.686942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:23:59.116 [2024-07-24 19:06:36.687195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.116 [2024-07-24 19:06:36.687219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.116 [2024-07-24 19:06:36.687234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.116 [2024-07-24 19:06:36.690844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.116 [2024-07-24 19:06:36.700149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.117 [2024-07-24 19:06:36.700581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.117 [2024-07-24 19:06:36.700613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:23:59.117 [2024-07-24 19:06:36.700630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:23:59.117 [2024-07-24 19:06:36.700867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:23:59.117 [2024-07-24 19:06:36.701119] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.117 [2024-07-24 19:06:36.701153] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.117 [2024-07-24 19:06:36.701168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.117 [2024-07-24 19:06:36.704738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.117 [2024-07-24 19:06:36.714012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.117 [2024-07-24 19:06:36.714459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.117 [2024-07-24 19:06:36.714490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:23:59.117 [2024-07-24 19:06:36.714507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:23:59.117 [2024-07-24 19:06:36.714745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:23:59.117 [2024-07-24 19:06:36.714987] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.117 [2024-07-24 19:06:36.715010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.117 [2024-07-24 19:06:36.715025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.376 [2024-07-24 19:06:36.718610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.376 [2024-07-24 19:06:36.727886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.376 [2024-07-24 19:06:36.728334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.376 [2024-07-24 19:06:36.728365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:23:59.376 [2024-07-24 19:06:36.728383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:23:59.376 [2024-07-24 19:06:36.728621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:23:59.376 [2024-07-24 19:06:36.728862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.376 [2024-07-24 19:06:36.728885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.376 [2024-07-24 19:06:36.728900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.376 [2024-07-24 19:06:36.732483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.376 [2024-07-24 19:06:36.741769] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.376 [2024-07-24 19:06:36.742216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.376 [2024-07-24 19:06:36.742248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:23:59.376 [2024-07-24 19:06:36.742265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:23:59.376 [2024-07-24 19:06:36.742504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:23:59.377 [2024-07-24 19:06:36.742746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.377 [2024-07-24 19:06:36.742769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.377 [2024-07-24 19:06:36.742783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.377 [2024-07-24 19:06:36.746377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.377 [2024-07-24 19:06:36.755657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.377 [2024-07-24 19:06:36.756064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.377 [2024-07-24 19:06:36.756096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:23:59.377 [2024-07-24 19:06:36.756123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:23:59.377 [2024-07-24 19:06:36.756362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:23:59.377 [2024-07-24 19:06:36.756604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.377 [2024-07-24 19:06:36.756628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.377 [2024-07-24 19:06:36.756643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.377 [2024-07-24 19:06:36.760222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:59.377 [2024-07-24 19:06:36.769502] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.377 [2024-07-24 19:06:36.769918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.377 [2024-07-24 19:06:36.769948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:23:59.377 [2024-07-24 19:06:36.769972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:23:59.377 [2024-07-24 19:06:36.770221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:23:59.377 [2024-07-24 19:06:36.770464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.377 [2024-07-24 19:06:36.770487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.377 [2024-07-24 19:06:36.770501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.377 [2024-07-24 19:06:36.774068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:59.377 [2024-07-24 19:06:36.783351] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.377 [2024-07-24 19:06:36.783777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.377 [2024-07-24 19:06:36.783808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.377 [2024-07-24 19:06:36.783825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.377 [2024-07-24 19:06:36.784063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.377 [2024-07-24 19:06:36.784315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.377 [2024-07-24 19:06:36.784338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.377 [2024-07-24 19:06:36.784353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.377 [2024-07-24 19:06:36.787916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.377 [2024-07-24 19:06:36.797206] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.377 [2024-07-24 19:06:36.797615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.377 [2024-07-24 19:06:36.797646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.377 [2024-07-24 19:06:36.797664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.377 [2024-07-24 19:06:36.797901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.377 [2024-07-24 19:06:36.798155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.377 [2024-07-24 19:06:36.798179] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.377 [2024-07-24 19:06:36.798193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.377 [2024-07-24 19:06:36.801761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.377 [2024-07-24 19:06:36.811031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.377 [2024-07-24 19:06:36.811450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.377 [2024-07-24 19:06:36.811482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.377 [2024-07-24 19:06:36.811499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.377 [2024-07-24 19:06:36.811736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.377 [2024-07-24 19:06:36.811979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.377 [2024-07-24 19:06:36.812009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.377 [2024-07-24 19:06:36.812025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.377 [2024-07-24 19:06:36.815601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.377 [2024-07-24 19:06:36.824862] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.377 [2024-07-24 19:06:36.825314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.377 [2024-07-24 19:06:36.825345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.377 [2024-07-24 19:06:36.825364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.377 [2024-07-24 19:06:36.825601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.377 [2024-07-24 19:06:36.825843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.377 [2024-07-24 19:06:36.825866] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.377 [2024-07-24 19:06:36.825880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.377 [2024-07-24 19:06:36.829454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.377 [2024-07-24 19:06:36.838712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.377 [2024-07-24 19:06:36.841249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.377 [2024-07-24 19:06:36.841291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.377 [2024-07-24 19:06:36.841311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.377 [2024-07-24 19:06:36.841556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.377 [2024-07-24 19:06:36.841799] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.377 [2024-07-24 19:06:36.841822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.377 [2024-07-24 19:06:36.841838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.377 [2024-07-24 19:06:36.845421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.377 [2024-07-24 19:06:36.852593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.377 [2024-07-24 19:06:36.853007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.377 [2024-07-24 19:06:36.853039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.377 [2024-07-24 19:06:36.853057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.377 [2024-07-24 19:06:36.853306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.377 [2024-07-24 19:06:36.853549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.377 [2024-07-24 19:06:36.853572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.377 [2024-07-24 19:06:36.853587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.377 [2024-07-24 19:06:36.857156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.377 [2024-07-24 19:06:36.866632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.377 [2024-07-24 19:06:36.867037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.377 [2024-07-24 19:06:36.867068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.377 [2024-07-24 19:06:36.867086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.377 [2024-07-24 19:06:36.867334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.377 [2024-07-24 19:06:36.867577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.377 [2024-07-24 19:06:36.867600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.377 [2024-07-24 19:06:36.867614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.377 [2024-07-24 19:06:36.871186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.377 [2024-07-24 19:06:36.880663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.377 [2024-07-24 19:06:36.881056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.377 [2024-07-24 19:06:36.881089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.377 [2024-07-24 19:06:36.881116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.378 [2024-07-24 19:06:36.881357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.378 [2024-07-24 19:06:36.881600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.378 [2024-07-24 19:06:36.881623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.378 [2024-07-24 19:06:36.881637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.378 [2024-07-24 19:06:36.885211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.378 [2024-07-24 19:06:36.894687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.378 [2024-07-24 19:06:36.895110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.378 [2024-07-24 19:06:36.895142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.378 [2024-07-24 19:06:36.895159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.378 [2024-07-24 19:06:36.895397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.378 [2024-07-24 19:06:36.895640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.378 [2024-07-24 19:06:36.895663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.378 [2024-07-24 19:06:36.895678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.378 [2024-07-24 19:06:36.899254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.378 [2024-07-24 19:06:36.908519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.378 [2024-07-24 19:06:36.908928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.378 [2024-07-24 19:06:36.908958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.378 [2024-07-24 19:06:36.908976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.378 [2024-07-24 19:06:36.909230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.378 [2024-07-24 19:06:36.909473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.378 [2024-07-24 19:06:36.909496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.378 [2024-07-24 19:06:36.909511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.378 [2024-07-24 19:06:36.913076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.378 [2024-07-24 19:06:36.922550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.378 [2024-07-24 19:06:36.922993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.378 [2024-07-24 19:06:36.923024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.378 [2024-07-24 19:06:36.923042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.378 [2024-07-24 19:06:36.923290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.378 [2024-07-24 19:06:36.923534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.378 [2024-07-24 19:06:36.923557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.378 [2024-07-24 19:06:36.923571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.378 [2024-07-24 19:06:36.927145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.378 [2024-07-24 19:06:36.936405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.378 [2024-07-24 19:06:36.936826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.378 [2024-07-24 19:06:36.936857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.378 [2024-07-24 19:06:36.936874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.378 [2024-07-24 19:06:36.937123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.378 [2024-07-24 19:06:36.937367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.378 [2024-07-24 19:06:36.937390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.378 [2024-07-24 19:06:36.937405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.378 [2024-07-24 19:06:36.940979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.378 [2024-07-24 19:06:36.950274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.378 [2024-07-24 19:06:36.950706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.378 [2024-07-24 19:06:36.950738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.378 [2024-07-24 19:06:36.950755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.378 [2024-07-24 19:06:36.950993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.378 [2024-07-24 19:06:36.951245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.378 [2024-07-24 19:06:36.951270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.378 [2024-07-24 19:06:36.951292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.378 [2024-07-24 19:06:36.954859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.378 [2024-07-24 19:06:36.964131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.378 [2024-07-24 19:06:36.964542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.378 [2024-07-24 19:06:36.964573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.378 [2024-07-24 19:06:36.964590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.378 [2024-07-24 19:06:36.964828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.378 [2024-07-24 19:06:36.965069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.378 [2024-07-24 19:06:36.965093] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.378 [2024-07-24 19:06:36.965119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.378 [2024-07-24 19:06:36.968686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.638 [2024-07-24 19:06:36.978173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.638 [2024-07-24 19:06:36.978582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.638 [2024-07-24 19:06:36.978613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.638 [2024-07-24 19:06:36.978630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.638 [2024-07-24 19:06:36.978868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.638 [2024-07-24 19:06:36.979122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.638 [2024-07-24 19:06:36.979145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.638 [2024-07-24 19:06:36.979160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.638 [2024-07-24 19:06:36.982724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.638 [2024-07-24 19:06:36.992199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.638 [2024-07-24 19:06:36.992639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.638 [2024-07-24 19:06:36.992670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.638 [2024-07-24 19:06:36.992688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.638 [2024-07-24 19:06:36.992926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.638 [2024-07-24 19:06:36.993184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.638 [2024-07-24 19:06:36.993208] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.638 [2024-07-24 19:06:36.993223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.638 [2024-07-24 19:06:36.996786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.638 [2024-07-24 19:06:37.006045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.638 [2024-07-24 19:06:37.006467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.638 [2024-07-24 19:06:37.006499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.638 [2024-07-24 19:06:37.006516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.638 [2024-07-24 19:06:37.006754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.638 [2024-07-24 19:06:37.006997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.638 [2024-07-24 19:06:37.007020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.638 [2024-07-24 19:06:37.007035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.638 [2024-07-24 19:06:37.010609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.638 [2024-07-24 19:06:37.020079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.638 [2024-07-24 19:06:37.020502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.638 [2024-07-24 19:06:37.020533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.638 [2024-07-24 19:06:37.020550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.638 [2024-07-24 19:06:37.020788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.638 [2024-07-24 19:06:37.021030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.638 [2024-07-24 19:06:37.021053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.638 [2024-07-24 19:06:37.021068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.638 [2024-07-24 19:06:37.024642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.638 [2024-07-24 19:06:37.034121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.638 [2024-07-24 19:06:37.034550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.638 [2024-07-24 19:06:37.034580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.638 [2024-07-24 19:06:37.034597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.638 [2024-07-24 19:06:37.034834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.638 [2024-07-24 19:06:37.035077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.638 [2024-07-24 19:06:37.035100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.638 [2024-07-24 19:06:37.035127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.638 [2024-07-24 19:06:37.038692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.638 [2024-07-24 19:06:37.047979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.638 [2024-07-24 19:06:37.048395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.638 [2024-07-24 19:06:37.048426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.638 [2024-07-24 19:06:37.048444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.638 [2024-07-24 19:06:37.048692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.638 [2024-07-24 19:06:37.048935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.638 [2024-07-24 19:06:37.048958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.638 [2024-07-24 19:06:37.048973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.638 [2024-07-24 19:06:37.052550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.638 [2024-07-24 19:06:37.061810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.638 [2024-07-24 19:06:37.062196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.638 [2024-07-24 19:06:37.062227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.638 [2024-07-24 19:06:37.062245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.638 [2024-07-24 19:06:37.062482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.638 [2024-07-24 19:06:37.062725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.638 [2024-07-24 19:06:37.062748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.638 [2024-07-24 19:06:37.062762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.638 [2024-07-24 19:06:37.066336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.638 [2024-07-24 19:06:37.075811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.638 [2024-07-24 19:06:37.076242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.638 [2024-07-24 19:06:37.076274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.638 [2024-07-24 19:06:37.076292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.638 [2024-07-24 19:06:37.076529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.638 [2024-07-24 19:06:37.076771] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.638 [2024-07-24 19:06:37.076794] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.638 [2024-07-24 19:06:37.076809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.638 [2024-07-24 19:06:37.080383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.638 [2024-07-24 19:06:37.089645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.638 [2024-07-24 19:06:37.090055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.638 [2024-07-24 19:06:37.090086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.638 [2024-07-24 19:06:37.090113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.638 [2024-07-24 19:06:37.090354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.638 [2024-07-24 19:06:37.090596] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.638 [2024-07-24 19:06:37.090620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.638 [2024-07-24 19:06:37.090635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.639 [2024-07-24 19:06:37.094218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.639 [2024-07-24 19:06:37.103480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.639 [2024-07-24 19:06:37.103892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.639 [2024-07-24 19:06:37.103923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.639 [2024-07-24 19:06:37.103941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.639 [2024-07-24 19:06:37.104190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.639 [2024-07-24 19:06:37.104433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.639 [2024-07-24 19:06:37.104456] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.639 [2024-07-24 19:06:37.104471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.639 [2024-07-24 19:06:37.108034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.639 [2024-07-24 19:06:37.117508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.639 [2024-07-24 19:06:37.117915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.639 [2024-07-24 19:06:37.117946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.639 [2024-07-24 19:06:37.117963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.639 [2024-07-24 19:06:37.118213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.639 [2024-07-24 19:06:37.118456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.639 [2024-07-24 19:06:37.118479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.639 [2024-07-24 19:06:37.118494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.639 [2024-07-24 19:06:37.122057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.639 [2024-07-24 19:06:37.131533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.639 [2024-07-24 19:06:37.131939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.639 [2024-07-24 19:06:37.131970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.639 [2024-07-24 19:06:37.131987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.639 [2024-07-24 19:06:37.132234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.639 [2024-07-24 19:06:37.132477] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.639 [2024-07-24 19:06:37.132501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.639 [2024-07-24 19:06:37.132516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.639 [2024-07-24 19:06:37.136078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.639 [2024-07-24 19:06:37.145387] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.639 [2024-07-24 19:06:37.145796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.639 [2024-07-24 19:06:37.145833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.639 [2024-07-24 19:06:37.145851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.639 [2024-07-24 19:06:37.146089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.639 [2024-07-24 19:06:37.146342] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.639 [2024-07-24 19:06:37.146366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.639 [2024-07-24 19:06:37.146381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.639 [2024-07-24 19:06:37.149957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.639 [2024-07-24 19:06:37.159226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.639 [2024-07-24 19:06:37.159635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.639 [2024-07-24 19:06:37.159666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.639 [2024-07-24 19:06:37.159683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.639 [2024-07-24 19:06:37.159921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.639 [2024-07-24 19:06:37.160175] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.639 [2024-07-24 19:06:37.160199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.639 [2024-07-24 19:06:37.160214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.639 [2024-07-24 19:06:37.163777] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.639 [2024-07-24 19:06:37.173256] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.639 [2024-07-24 19:06:37.173686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.639 [2024-07-24 19:06:37.173717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.639 [2024-07-24 19:06:37.173734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.639 [2024-07-24 19:06:37.173971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.639 [2024-07-24 19:06:37.174223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.639 [2024-07-24 19:06:37.174247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.639 [2024-07-24 19:06:37.174262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.639 [2024-07-24 19:06:37.177827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.639 [2024-07-24 19:06:37.187087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.639 [2024-07-24 19:06:37.187520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.639 [2024-07-24 19:06:37.187551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.639 [2024-07-24 19:06:37.187568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.639 [2024-07-24 19:06:37.187805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.639 [2024-07-24 19:06:37.188053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.639 [2024-07-24 19:06:37.188077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.639 [2024-07-24 19:06:37.188092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.639 [2024-07-24 19:06:37.191666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.639 [2024-07-24 19:06:37.200950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.639 [2024-07-24 19:06:37.201403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.639 [2024-07-24 19:06:37.201434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.639 [2024-07-24 19:06:37.201451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.639 [2024-07-24 19:06:37.201688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.639 [2024-07-24 19:06:37.201930] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.639 [2024-07-24 19:06:37.201953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.639 [2024-07-24 19:06:37.201968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.639 [2024-07-24 19:06:37.205546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.639 [2024-07-24 19:06:37.214810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.639 [2024-07-24 19:06:37.215239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.639 [2024-07-24 19:06:37.215270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.639 [2024-07-24 19:06:37.215288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.639 [2024-07-24 19:06:37.215525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.639 [2024-07-24 19:06:37.215768] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.639 [2024-07-24 19:06:37.215791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.639 [2024-07-24 19:06:37.215806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.639 [2024-07-24 19:06:37.219382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.639 [2024-07-24 19:06:37.228645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.639 [2024-07-24 19:06:37.229077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.639 [2024-07-24 19:06:37.229114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.639 [2024-07-24 19:06:37.229134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.639 [2024-07-24 19:06:37.229372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.639 [2024-07-24 19:06:37.229614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.639 [2024-07-24 19:06:37.229637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.639 [2024-07-24 19:06:37.229652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.639 [2024-07-24 19:06:37.233225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.899 [2024-07-24 19:06:37.242505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.899 [2024-07-24 19:06:37.242948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.899 [2024-07-24 19:06:37.242979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.899 [2024-07-24 19:06:37.242997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.899 [2024-07-24 19:06:37.243247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.899 [2024-07-24 19:06:37.243490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.899 [2024-07-24 19:06:37.243513] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.899 [2024-07-24 19:06:37.243527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.899 [2024-07-24 19:06:37.247113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.899 [2024-07-24 19:06:37.256377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.899 [2024-07-24 19:06:37.256784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.899 [2024-07-24 19:06:37.256815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.899 [2024-07-24 19:06:37.256832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.899 [2024-07-24 19:06:37.257070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.899 [2024-07-24 19:06:37.257321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.899 [2024-07-24 19:06:37.257346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.899 [2024-07-24 19:06:37.257361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.899 [2024-07-24 19:06:37.260924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.899 [2024-07-24 19:06:37.270400] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.899 [2024-07-24 19:06:37.270829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.899 [2024-07-24 19:06:37.270859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.899 [2024-07-24 19:06:37.270877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.899 [2024-07-24 19:06:37.271124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.899 [2024-07-24 19:06:37.271366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.899 [2024-07-24 19:06:37.271389] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.899 [2024-07-24 19:06:37.271404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.899 [2024-07-24 19:06:37.274969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.899 [2024-07-24 19:06:37.284238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.899 [2024-07-24 19:06:37.284621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.899 [2024-07-24 19:06:37.284651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.899 [2024-07-24 19:06:37.284675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.899 [2024-07-24 19:06:37.284913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.899 [2024-07-24 19:06:37.285166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.899 [2024-07-24 19:06:37.285190] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.899 [2024-07-24 19:06:37.285205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.899 [2024-07-24 19:06:37.288770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.899 [2024-07-24 19:06:37.298252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.899 [2024-07-24 19:06:37.298694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.899 [2024-07-24 19:06:37.298724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.899 [2024-07-24 19:06:37.298742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.899 [2024-07-24 19:06:37.298979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.899 [2024-07-24 19:06:37.299232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.899 [2024-07-24 19:06:37.299255] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.899 [2024-07-24 19:06:37.299270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.899 [2024-07-24 19:06:37.302833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.899 [2024-07-24 19:06:37.312095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.899 [2024-07-24 19:06:37.312516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.899 [2024-07-24 19:06:37.312547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.899 [2024-07-24 19:06:37.312564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.899 [2024-07-24 19:06:37.312801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.899 [2024-07-24 19:06:37.313043] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.899 [2024-07-24 19:06:37.313066] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.899 [2024-07-24 19:06:37.313080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.899 [2024-07-24 19:06:37.316653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.899 [2024-07-24 19:06:37.326128] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.899 [2024-07-24 19:06:37.326530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.899 [2024-07-24 19:06:37.326560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.899 [2024-07-24 19:06:37.326577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.899 [2024-07-24 19:06:37.326814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.899 [2024-07-24 19:06:37.327056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.899 [2024-07-24 19:06:37.327084] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.899 [2024-07-24 19:06:37.327100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.899 [2024-07-24 19:06:37.330681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.899 [2024-07-24 19:06:37.340169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.899 [2024-07-24 19:06:37.340610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.899 [2024-07-24 19:06:37.340641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.899 [2024-07-24 19:06:37.340659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.899 [2024-07-24 19:06:37.340897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.899 [2024-07-24 19:06:37.341150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.900 [2024-07-24 19:06:37.341174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.900 [2024-07-24 19:06:37.341189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.900 [2024-07-24 19:06:37.344754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.900 [2024-07-24 19:06:37.354042] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.900 [2024-07-24 19:06:37.354471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.900 [2024-07-24 19:06:37.354503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.900 [2024-07-24 19:06:37.354521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.900 [2024-07-24 19:06:37.354759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.900 [2024-07-24 19:06:37.355001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.900 [2024-07-24 19:06:37.355024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.900 [2024-07-24 19:06:37.355039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.900 [2024-07-24 19:06:37.358619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.900 [2024-07-24 19:06:37.367883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.900 [2024-07-24 19:06:37.368275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.900 [2024-07-24 19:06:37.368307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.900 [2024-07-24 19:06:37.368324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.900 [2024-07-24 19:06:37.368560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.900 [2024-07-24 19:06:37.368802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.900 [2024-07-24 19:06:37.368825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.900 [2024-07-24 19:06:37.368840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.900 [2024-07-24 19:06:37.372414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.900 [2024-07-24 19:06:37.381890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.900 [2024-07-24 19:06:37.382287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.900 [2024-07-24 19:06:37.382318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.900 [2024-07-24 19:06:37.382335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.900 [2024-07-24 19:06:37.382573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.900 [2024-07-24 19:06:37.382815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.900 [2024-07-24 19:06:37.382838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.900 [2024-07-24 19:06:37.382853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.900 [2024-07-24 19:06:37.386664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.900 [2024-07-24 19:06:37.395930] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.900 [2024-07-24 19:06:37.396347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.900 [2024-07-24 19:06:37.396378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.900 [2024-07-24 19:06:37.396396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.900 [2024-07-24 19:06:37.396634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.900 [2024-07-24 19:06:37.396875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.900 [2024-07-24 19:06:37.396899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.900 [2024-07-24 19:06:37.396914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.900 [2024-07-24 19:06:37.400489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.900 [2024-07-24 19:06:37.409956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.900 [2024-07-24 19:06:37.410394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.900 [2024-07-24 19:06:37.410426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.900 [2024-07-24 19:06:37.410443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.900 [2024-07-24 19:06:37.410680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.900 [2024-07-24 19:06:37.410921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.900 [2024-07-24 19:06:37.410944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.900 [2024-07-24 19:06:37.410959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.900 [2024-07-24 19:06:37.414531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.900 [2024-07-24 19:06:37.423788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.900 [2024-07-24 19:06:37.424199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.900 [2024-07-24 19:06:37.424231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.900 [2024-07-24 19:06:37.424249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.900 [2024-07-24 19:06:37.424493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.900 [2024-07-24 19:06:37.424736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.900 [2024-07-24 19:06:37.424759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.900 [2024-07-24 19:06:37.424774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.900 [2024-07-24 19:06:37.428348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.900 [2024-07-24 19:06:37.437819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.900 [2024-07-24 19:06:37.438233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.900 [2024-07-24 19:06:37.438263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.900 [2024-07-24 19:06:37.438281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.900 [2024-07-24 19:06:37.438518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.900 [2024-07-24 19:06:37.438760] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.900 [2024-07-24 19:06:37.438784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.900 [2024-07-24 19:06:37.438799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.900 [2024-07-24 19:06:37.442385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.900 [2024-07-24 19:06:37.451662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.900 [2024-07-24 19:06:37.452050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.900 [2024-07-24 19:06:37.452081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.900 [2024-07-24 19:06:37.452098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.900 [2024-07-24 19:06:37.452349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.900 [2024-07-24 19:06:37.452591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.900 [2024-07-24 19:06:37.452614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.900 [2024-07-24 19:06:37.452629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.900 [2024-07-24 19:06:37.456201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.900 [2024-07-24 19:06:37.465668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.900 [2024-07-24 19:06:37.466078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.900 [2024-07-24 19:06:37.466115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.900 [2024-07-24 19:06:37.466135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.900 [2024-07-24 19:06:37.466373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.900 [2024-07-24 19:06:37.466616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.900 [2024-07-24 19:06:37.466639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.900 [2024-07-24 19:06:37.466659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.900 [2024-07-24 19:06:37.470236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.900 [2024-07-24 19:06:37.479498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.900 [2024-07-24 19:06:37.479910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.900 [2024-07-24 19:06:37.479941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.900 [2024-07-24 19:06:37.479958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.900 [2024-07-24 19:06:37.480208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.900 [2024-07-24 19:06:37.480452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.900 [2024-07-24 19:06:37.480475] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.901 [2024-07-24 19:06:37.480490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.901 [2024-07-24 19:06:37.484053] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:59.901 [2024-07-24 19:06:37.493538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.901 [2024-07-24 19:06:37.493975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.901 [2024-07-24 19:06:37.494006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:23:59.901 [2024-07-24 19:06:37.494023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:23:59.901 [2024-07-24 19:06:37.494272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:23:59.901 [2024-07-24 19:06:37.494515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:59.901 [2024-07-24 19:06:37.494538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:59.901 [2024-07-24 19:06:37.494553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:59.901 [2024-07-24 19:06:37.498125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.161 [2024-07-24 19:06:37.507389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.161 [2024-07-24 19:06:37.507818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.161 [2024-07-24 19:06:37.507849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.161 [2024-07-24 19:06:37.507866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.161 [2024-07-24 19:06:37.508113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.161 [2024-07-24 19:06:37.508355] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.161 [2024-07-24 19:06:37.508379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.161 [2024-07-24 19:06:37.508394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.161 [2024-07-24 19:06:37.511960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.161 [2024-07-24 19:06:37.521225] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.161 [2024-07-24 19:06:37.521677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.161 [2024-07-24 19:06:37.521708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.161 [2024-07-24 19:06:37.521725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.161 [2024-07-24 19:06:37.521963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.161 [2024-07-24 19:06:37.522216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.161 [2024-07-24 19:06:37.522240] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.161 [2024-07-24 19:06:37.522255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.161 [2024-07-24 19:06:37.525820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.161 [2024-07-24 19:06:37.535075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.161 [2024-07-24 19:06:37.535491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.161 [2024-07-24 19:06:37.535522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.161 [2024-07-24 19:06:37.535539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.161 [2024-07-24 19:06:37.535777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.161 [2024-07-24 19:06:37.536019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.161 [2024-07-24 19:06:37.536042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.161 [2024-07-24 19:06:37.536056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.161 [2024-07-24 19:06:37.539632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.161 [2024-07-24 19:06:37.548915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.161 [2024-07-24 19:06:37.549368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.161 [2024-07-24 19:06:37.549399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.161 [2024-07-24 19:06:37.549416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.161 [2024-07-24 19:06:37.549654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.161 [2024-07-24 19:06:37.549896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.161 [2024-07-24 19:06:37.549919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.161 [2024-07-24 19:06:37.549934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.161 [2024-07-24 19:06:37.553511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.161 [2024-07-24 19:06:37.562795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.161 [2024-07-24 19:06:37.563201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.161 [2024-07-24 19:06:37.563233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.161 [2024-07-24 19:06:37.563251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.161 [2024-07-24 19:06:37.563489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.161 [2024-07-24 19:06:37.563737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.161 [2024-07-24 19:06:37.563760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.161 [2024-07-24 19:06:37.563776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.161 [2024-07-24 19:06:37.567357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.161 [2024-07-24 19:06:37.576832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.161 [2024-07-24 19:06:37.577286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.161 [2024-07-24 19:06:37.577317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.161 [2024-07-24 19:06:37.577335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.161 [2024-07-24 19:06:37.577573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.161 [2024-07-24 19:06:37.577815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.161 [2024-07-24 19:06:37.577838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.161 [2024-07-24 19:06:37.577853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.161 [2024-07-24 19:06:37.581429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.161 [2024-07-24 19:06:37.590694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.161 [2024-07-24 19:06:37.591078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.161 [2024-07-24 19:06:37.591118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.161 [2024-07-24 19:06:37.591138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.161 [2024-07-24 19:06:37.591376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.161 [2024-07-24 19:06:37.591619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.161 [2024-07-24 19:06:37.591642] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.161 [2024-07-24 19:06:37.591657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.161 [2024-07-24 19:06:37.595240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.161 [2024-07-24 19:06:37.604745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.161 [2024-07-24 19:06:37.605155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.161 [2024-07-24 19:06:37.605187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.161 [2024-07-24 19:06:37.605204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.161 [2024-07-24 19:06:37.605442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.161 [2024-07-24 19:06:37.605685] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.161 [2024-07-24 19:06:37.605708] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.161 [2024-07-24 19:06:37.605723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.161 [2024-07-24 19:06:37.609310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.161 [2024-07-24 19:06:37.618586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.161 [2024-07-24 19:06:37.618992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.161 [2024-07-24 19:06:37.619024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.161 [2024-07-24 19:06:37.619042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.161 [2024-07-24 19:06:37.619289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.161 [2024-07-24 19:06:37.619533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.161 [2024-07-24 19:06:37.619556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.161 [2024-07-24 19:06:37.619571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.161 [2024-07-24 19:06:37.623148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.161 [2024-07-24 19:06:37.632424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.161 [2024-07-24 19:06:37.632838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.161 [2024-07-24 19:06:37.632869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.161 [2024-07-24 19:06:37.632887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.161 [2024-07-24 19:06:37.633135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.161 [2024-07-24 19:06:37.633378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.161 [2024-07-24 19:06:37.633401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.161 [2024-07-24 19:06:37.633416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.161 [2024-07-24 19:06:37.636981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.162 [2024-07-24 19:06:37.646272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.162 [2024-07-24 19:06:37.646714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.162 [2024-07-24 19:06:37.646745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.162 [2024-07-24 19:06:37.646762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.162 [2024-07-24 19:06:37.647000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.162 [2024-07-24 19:06:37.647252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.162 [2024-07-24 19:06:37.647276] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.162 [2024-07-24 19:06:37.647291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.162 [2024-07-24 19:06:37.650871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.162 [2024-07-24 19:06:37.660144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.162 [2024-07-24 19:06:37.660549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.162 [2024-07-24 19:06:37.660580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.162 [2024-07-24 19:06:37.660603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.162 [2024-07-24 19:06:37.660842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.162 [2024-07-24 19:06:37.661084] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.162 [2024-07-24 19:06:37.661116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.162 [2024-07-24 19:06:37.661132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.162 [2024-07-24 19:06:37.664700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.162 [2024-07-24 19:06:37.673974] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.162 [2024-07-24 19:06:37.674397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.162 [2024-07-24 19:06:37.674429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.162 [2024-07-24 19:06:37.674447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.162 [2024-07-24 19:06:37.674684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.162 [2024-07-24 19:06:37.674926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.162 [2024-07-24 19:06:37.674950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.162 [2024-07-24 19:06:37.674965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.162 [2024-07-24 19:06:37.678546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.162 [2024-07-24 19:06:37.687815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.162 [2024-07-24 19:06:37.688249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.162 [2024-07-24 19:06:37.688280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.162 [2024-07-24 19:06:37.688297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.162 [2024-07-24 19:06:37.688535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.162 [2024-07-24 19:06:37.688777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.162 [2024-07-24 19:06:37.688800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.162 [2024-07-24 19:06:37.688815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.162 [2024-07-24 19:06:37.692394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.162 [2024-07-24 19:06:37.701668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.162 [2024-07-24 19:06:37.702055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.162 [2024-07-24 19:06:37.702086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.162 [2024-07-24 19:06:37.702110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.162 [2024-07-24 19:06:37.702350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.162 [2024-07-24 19:06:37.702599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.162 [2024-07-24 19:06:37.702622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.162 [2024-07-24 19:06:37.702637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.162 [2024-07-24 19:06:37.706219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.162 [2024-07-24 19:06:37.715778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.162 [2024-07-24 19:06:37.716166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.162 [2024-07-24 19:06:37.716198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.162 [2024-07-24 19:06:37.716216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.162 [2024-07-24 19:06:37.716454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.162 [2024-07-24 19:06:37.716697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.162 [2024-07-24 19:06:37.716721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.162 [2024-07-24 19:06:37.716735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.162 [2024-07-24 19:06:37.720308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.162 [2024-07-24 19:06:37.729781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.162 [2024-07-24 19:06:37.730186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.162 [2024-07-24 19:06:37.730218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.162 [2024-07-24 19:06:37.730236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.162 [2024-07-24 19:06:37.730475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.162 [2024-07-24 19:06:37.730717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.162 [2024-07-24 19:06:37.730740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.162 [2024-07-24 19:06:37.730755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.162 [2024-07-24 19:06:37.734330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.162 [2024-07-24 19:06:37.743814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.162 [2024-07-24 19:06:37.744250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.162 [2024-07-24 19:06:37.744282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.162 [2024-07-24 19:06:37.744299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.162 [2024-07-24 19:06:37.744537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.162 [2024-07-24 19:06:37.744779] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.162 [2024-07-24 19:06:37.744802] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.162 [2024-07-24 19:06:37.744817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.162 [2024-07-24 19:06:37.748394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.162 [2024-07-24 19:06:37.757681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.162 [2024-07-24 19:06:37.758091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.162 [2024-07-24 19:06:37.758129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.162 [2024-07-24 19:06:37.758147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.162 [2024-07-24 19:06:37.758385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.162 [2024-07-24 19:06:37.758627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.162 [2024-07-24 19:06:37.758650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.162 [2024-07-24 19:06:37.758665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.421 [2024-07-24 19:06:37.762242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.421 [2024-07-24 19:06:37.771729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.421 [2024-07-24 19:06:37.772162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.421 [2024-07-24 19:06:37.772194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.421 [2024-07-24 19:06:37.772211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.421 [2024-07-24 19:06:37.772449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.422 [2024-07-24 19:06:37.772692] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.422 [2024-07-24 19:06:37.772715] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.422 [2024-07-24 19:06:37.772730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.422 [2024-07-24 19:06:37.776310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.422 [2024-07-24 19:06:37.785576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.422 [2024-07-24 19:06:37.786016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.422 [2024-07-24 19:06:37.786048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.422 [2024-07-24 19:06:37.786066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.422 [2024-07-24 19:06:37.786314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.422 [2024-07-24 19:06:37.786558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.422 [2024-07-24 19:06:37.786581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.422 [2024-07-24 19:06:37.786597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.422 [2024-07-24 19:06:37.790169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.422 [2024-07-24 19:06:37.799435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.422 [2024-07-24 19:06:37.799839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.422 [2024-07-24 19:06:37.799870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.422 [2024-07-24 19:06:37.799893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.422 [2024-07-24 19:06:37.800141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.422 [2024-07-24 19:06:37.800384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.422 [2024-07-24 19:06:37.800408] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.422 [2024-07-24 19:06:37.800423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.422 [2024-07-24 19:06:37.803989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.422 [2024-07-24 19:06:37.813466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.422 [2024-07-24 19:06:37.813913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.422 [2024-07-24 19:06:37.813943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.422 [2024-07-24 19:06:37.813961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.422 [2024-07-24 19:06:37.814208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.422 [2024-07-24 19:06:37.814451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.422 [2024-07-24 19:06:37.814474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.422 [2024-07-24 19:06:37.814490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.422 [2024-07-24 19:06:37.818053] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.422 [2024-07-24 19:06:37.827325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.422 [2024-07-24 19:06:37.827765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.422 [2024-07-24 19:06:37.827795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.422 [2024-07-24 19:06:37.827813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.422 [2024-07-24 19:06:37.828050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.422 [2024-07-24 19:06:37.828301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.422 [2024-07-24 19:06:37.828325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.422 [2024-07-24 19:06:37.828340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.422 [2024-07-24 19:06:37.831907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.422 [2024-07-24 19:06:37.841210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.422 [2024-07-24 19:06:37.841624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.422 [2024-07-24 19:06:37.841655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.422 [2024-07-24 19:06:37.841672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.422 [2024-07-24 19:06:37.841910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.422 [2024-07-24 19:06:37.842166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.422 [2024-07-24 19:06:37.842195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.422 [2024-07-24 19:06:37.842211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.422 [2024-07-24 19:06:37.845781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.422 [2024-07-24 19:06:37.855072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.422 [2024-07-24 19:06:37.855523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.422 [2024-07-24 19:06:37.855554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.422 [2024-07-24 19:06:37.855571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.422 [2024-07-24 19:06:37.855809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.422 [2024-07-24 19:06:37.856051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.422 [2024-07-24 19:06:37.856074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.422 [2024-07-24 19:06:37.856089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.422 [2024-07-24 19:06:37.859665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.422 [2024-07-24 19:06:37.868937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.422 [2024-07-24 19:06:37.869374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.422 [2024-07-24 19:06:37.869405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.422 [2024-07-24 19:06:37.869423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.422 [2024-07-24 19:06:37.869660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.422 [2024-07-24 19:06:37.869901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.422 [2024-07-24 19:06:37.869925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.422 [2024-07-24 19:06:37.869940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.422 [2024-07-24 19:06:37.873519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.422 [2024-07-24 19:06:37.882811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.422 [2024-07-24 19:06:37.883216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.422 [2024-07-24 19:06:37.883248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.422 [2024-07-24 19:06:37.883265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.422 [2024-07-24 19:06:37.883503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.422 [2024-07-24 19:06:37.883746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.422 [2024-07-24 19:06:37.883769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.422 [2024-07-24 19:06:37.883783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.422 [2024-07-24 19:06:37.887356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.422 [2024-07-24 19:06:37.896843] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.422 [2024-07-24 19:06:37.897241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.422 [2024-07-24 19:06:37.897272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.422 [2024-07-24 19:06:37.897290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.422 [2024-07-24 19:06:37.897528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.422 [2024-07-24 19:06:37.897770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.422 [2024-07-24 19:06:37.897793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.422 [2024-07-24 19:06:37.897808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.422 [2024-07-24 19:06:37.901391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.422 [2024-07-24 19:06:37.910869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.422 [2024-07-24 19:06:37.911284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.422 [2024-07-24 19:06:37.911316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.422 [2024-07-24 19:06:37.911333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.422 [2024-07-24 19:06:37.911572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.422 [2024-07-24 19:06:37.911814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.423 [2024-07-24 19:06:37.911838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.423 [2024-07-24 19:06:37.911852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.423 [2024-07-24 19:06:37.915437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.423 [2024-07-24 19:06:37.924704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.423 [2024-07-24 19:06:37.925138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.423 [2024-07-24 19:06:37.925168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.423 [2024-07-24 19:06:37.925186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.423 [2024-07-24 19:06:37.925423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.423 [2024-07-24 19:06:37.925664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.423 [2024-07-24 19:06:37.925688] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.423 [2024-07-24 19:06:37.925703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.423 [2024-07-24 19:06:37.929284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.423 [2024-07-24 19:06:37.938551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.423 [2024-07-24 19:06:37.938956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.423 [2024-07-24 19:06:37.938987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.423 [2024-07-24 19:06:37.939005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.423 [2024-07-24 19:06:37.939261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.423 [2024-07-24 19:06:37.939504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.423 [2024-07-24 19:06:37.939527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.423 [2024-07-24 19:06:37.939542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.423 [2024-07-24 19:06:37.943125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.423 [2024-07-24 19:06:37.952407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.423 [2024-07-24 19:06:37.952843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.423 [2024-07-24 19:06:37.952874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.423 [2024-07-24 19:06:37.952891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.423 [2024-07-24 19:06:37.953141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.423 [2024-07-24 19:06:37.953385] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.423 [2024-07-24 19:06:37.953408] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.423 [2024-07-24 19:06:37.953423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.423 [2024-07-24 19:06:37.956991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.423 [2024-07-24 19:06:37.966288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.423 [2024-07-24 19:06:37.966720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.423 [2024-07-24 19:06:37.966751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.423 [2024-07-24 19:06:37.966769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.423 [2024-07-24 19:06:37.967006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.423 [2024-07-24 19:06:37.967263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.423 [2024-07-24 19:06:37.967287] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.423 [2024-07-24 19:06:37.967303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.423 [2024-07-24 19:06:37.970872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.423 [2024-07-24 19:06:37.980185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.423 [2024-07-24 19:06:37.980588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.423 [2024-07-24 19:06:37.980619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.423 [2024-07-24 19:06:37.980636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.423 [2024-07-24 19:06:37.980874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.423 [2024-07-24 19:06:37.981127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.423 [2024-07-24 19:06:37.981153] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.423 [2024-07-24 19:06:37.981174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.423 [2024-07-24 19:06:37.984748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.423 [2024-07-24 19:06:37.994035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.423 [2024-07-24 19:06:37.994449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.423 [2024-07-24 19:06:37.994480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:00.423 [2024-07-24 19:06:37.994498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:00.423 [2024-07-24 19:06:37.994735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:00.423 [2024-07-24 19:06:37.994977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.423 [2024-07-24 19:06:37.995000] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.423 [2024-07-24 19:06:37.995015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.423 [2024-07-24 19:06:37.998594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.423 [2024-07-24 19:06:38.008076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.423 [2024-07-24 19:06:38.008516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.423 [2024-07-24 19:06:38.008547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.423 [2024-07-24 19:06:38.008565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.423 [2024-07-24 19:06:38.008803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.423 [2024-07-24 19:06:38.009045] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.423 [2024-07-24 19:06:38.009068] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.423 [2024-07-24 19:06:38.009082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.423 [2024-07-24 19:06:38.012661] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.423 [2024-07-24 19:06:38.021930] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.423 [2024-07-24 19:06:38.022343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.423 [2024-07-24 19:06:38.022374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.423 [2024-07-24 19:06:38.022391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.684 [2024-07-24 19:06:38.022629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.684 [2024-07-24 19:06:38.022872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.684 [2024-07-24 19:06:38.022894] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.684 [2024-07-24 19:06:38.022909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.684 [2024-07-24 19:06:38.026491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.684 [2024-07-24 19:06:38.035968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.684 [2024-07-24 19:06:38.036405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.684 [2024-07-24 19:06:38.036442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.684 [2024-07-24 19:06:38.036461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.684 [2024-07-24 19:06:38.036698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.684 [2024-07-24 19:06:38.036941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.684 [2024-07-24 19:06:38.036965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.684 [2024-07-24 19:06:38.036979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.684 [2024-07-24 19:06:38.040569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.684 [2024-07-24 19:06:38.049848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.684 [2024-07-24 19:06:38.050262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.684 [2024-07-24 19:06:38.050294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.684 [2024-07-24 19:06:38.050311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.684 [2024-07-24 19:06:38.050549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.684 [2024-07-24 19:06:38.050791] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.684 [2024-07-24 19:06:38.050814] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.684 [2024-07-24 19:06:38.050830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.684 [2024-07-24 19:06:38.054410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.684 [2024-07-24 19:06:38.063682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.684 [2024-07-24 19:06:38.064187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.684 [2024-07-24 19:06:38.064219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.684 [2024-07-24 19:06:38.064236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.684 [2024-07-24 19:06:38.064474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.684 [2024-07-24 19:06:38.064717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.684 [2024-07-24 19:06:38.064741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.684 [2024-07-24 19:06:38.064755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.684 [2024-07-24 19:06:38.068338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.684 [2024-07-24 19:06:38.077605] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.684 [2024-07-24 19:06:38.078009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.684 [2024-07-24 19:06:38.078041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.684 [2024-07-24 19:06:38.078058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.684 [2024-07-24 19:06:38.078308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.684 [2024-07-24 19:06:38.078558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.684 [2024-07-24 19:06:38.078581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.684 [2024-07-24 19:06:38.078595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.684 [2024-07-24 19:06:38.082170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.684 [2024-07-24 19:06:38.091439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.684 [2024-07-24 19:06:38.091874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.684 [2024-07-24 19:06:38.091905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.684 [2024-07-24 19:06:38.091923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.684 [2024-07-24 19:06:38.092174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.684 [2024-07-24 19:06:38.092418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.684 [2024-07-24 19:06:38.092441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.684 [2024-07-24 19:06:38.092456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.684 [2024-07-24 19:06:38.096026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.684 [2024-07-24 19:06:38.105305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.684 [2024-07-24 19:06:38.105706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.684 [2024-07-24 19:06:38.105737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.684 [2024-07-24 19:06:38.105754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.684 [2024-07-24 19:06:38.105992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.684 [2024-07-24 19:06:38.106247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.684 [2024-07-24 19:06:38.106271] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.684 [2024-07-24 19:06:38.106287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.684 [2024-07-24 19:06:38.109853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.684 [2024-07-24 19:06:38.119333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.684 [2024-07-24 19:06:38.119745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.684 [2024-07-24 19:06:38.119776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.684 [2024-07-24 19:06:38.119794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.684 [2024-07-24 19:06:38.120032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.684 [2024-07-24 19:06:38.120286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.684 [2024-07-24 19:06:38.120310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.684 [2024-07-24 19:06:38.120324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.684 [2024-07-24 19:06:38.123900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.684 [2024-07-24 19:06:38.133189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.684 [2024-07-24 19:06:38.133631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.684 [2024-07-24 19:06:38.133661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.684 [2024-07-24 19:06:38.133678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.684 [2024-07-24 19:06:38.133916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.684 [2024-07-24 19:06:38.134170] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.684 [2024-07-24 19:06:38.134194] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.684 [2024-07-24 19:06:38.134209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.684 [2024-07-24 19:06:38.137776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.684 [2024-07-24 19:06:38.147055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.684 [2024-07-24 19:06:38.147495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.684 [2024-07-24 19:06:38.147526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.684 [2024-07-24 19:06:38.147543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.685 [2024-07-24 19:06:38.147781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.685 [2024-07-24 19:06:38.148023] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.685 [2024-07-24 19:06:38.148046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.685 [2024-07-24 19:06:38.148061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.685 [2024-07-24 19:06:38.151654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.685 [2024-07-24 19:06:38.160923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.685 [2024-07-24 19:06:38.161372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.685 [2024-07-24 19:06:38.161403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.685 [2024-07-24 19:06:38.161420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.685 [2024-07-24 19:06:38.161658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.685 [2024-07-24 19:06:38.161900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.685 [2024-07-24 19:06:38.161924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.685 [2024-07-24 19:06:38.161939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.685 [2024-07-24 19:06:38.165519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.685 [2024-07-24 19:06:38.174788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.685 [2024-07-24 19:06:38.175198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.685 [2024-07-24 19:06:38.175229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.685 [2024-07-24 19:06:38.175252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.685 [2024-07-24 19:06:38.175492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.685 [2024-07-24 19:06:38.175735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.685 [2024-07-24 19:06:38.175758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.685 [2024-07-24 19:06:38.175773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.685 [2024-07-24 19:06:38.179350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.685 [2024-07-24 19:06:38.188630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.685 [2024-07-24 19:06:38.189037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.685 [2024-07-24 19:06:38.189068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.685 [2024-07-24 19:06:38.189085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.685 [2024-07-24 19:06:38.189334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.685 [2024-07-24 19:06:38.189577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.685 [2024-07-24 19:06:38.189600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.685 [2024-07-24 19:06:38.189615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.685 [2024-07-24 19:06:38.193194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.685 [2024-07-24 19:06:38.202463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.685 [2024-07-24 19:06:38.202869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.685 [2024-07-24 19:06:38.202900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.685 [2024-07-24 19:06:38.202918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.685 [2024-07-24 19:06:38.203169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.685 [2024-07-24 19:06:38.203413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.685 [2024-07-24 19:06:38.203436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.685 [2024-07-24 19:06:38.203451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.685 [2024-07-24 19:06:38.207017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.685 [2024-07-24 19:06:38.216503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.685 [2024-07-24 19:06:38.216908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.685 [2024-07-24 19:06:38.216940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.685 [2024-07-24 19:06:38.216957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.685 [2024-07-24 19:06:38.217208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.685 [2024-07-24 19:06:38.217452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.685 [2024-07-24 19:06:38.217481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.685 [2024-07-24 19:06:38.217497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.685 [2024-07-24 19:06:38.221065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.685 [2024-07-24 19:06:38.230363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.685 [2024-07-24 19:06:38.230795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.685 [2024-07-24 19:06:38.230826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.685 [2024-07-24 19:06:38.230844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.685 [2024-07-24 19:06:38.231081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.685 [2024-07-24 19:06:38.231335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.685 [2024-07-24 19:06:38.231359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.685 [2024-07-24 19:06:38.231374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.685 [2024-07-24 19:06:38.234942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.685 [2024-07-24 19:06:38.244229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.685 [2024-07-24 19:06:38.244658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.685 [2024-07-24 19:06:38.244689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.685 [2024-07-24 19:06:38.244707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.685 [2024-07-24 19:06:38.244944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.685 [2024-07-24 19:06:38.245198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.685 [2024-07-24 19:06:38.245222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.685 [2024-07-24 19:06:38.245237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.685 [2024-07-24 19:06:38.248804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.685 [2024-07-24 19:06:38.258082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.685 [2024-07-24 19:06:38.258516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.685 [2024-07-24 19:06:38.258555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.685 [2024-07-24 19:06:38.258575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.685 [2024-07-24 19:06:38.258814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.685 [2024-07-24 19:06:38.259056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.685 [2024-07-24 19:06:38.259079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.685 [2024-07-24 19:06:38.259094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.685 [2024-07-24 19:06:38.262676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.685 [2024-07-24 19:06:38.271953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.685 [2024-07-24 19:06:38.272365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.685 [2024-07-24 19:06:38.272396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.685 [2024-07-24 19:06:38.272414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.685 [2024-07-24 19:06:38.272652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.685 [2024-07-24 19:06:38.272894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.685 [2024-07-24 19:06:38.272917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.685 [2024-07-24 19:06:38.272932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.685 [2024-07-24 19:06:38.276510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.945 [2024-07-24 19:06:38.285990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.945 [2024-07-24 19:06:38.286424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.945 [2024-07-24 19:06:38.286455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.945 [2024-07-24 19:06:38.286473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.945 [2024-07-24 19:06:38.286710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.945 [2024-07-24 19:06:38.286953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.945 [2024-07-24 19:06:38.286976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.945 [2024-07-24 19:06:38.286991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.945 [2024-07-24 19:06:38.290617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.945 [2024-07-24 19:06:38.299890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.945 [2024-07-24 19:06:38.300316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.945 [2024-07-24 19:06:38.300348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.945 [2024-07-24 19:06:38.300365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.945 [2024-07-24 19:06:38.300603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.945 [2024-07-24 19:06:38.300846] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.945 [2024-07-24 19:06:38.300869] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.945 [2024-07-24 19:06:38.300883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.945 [2024-07-24 19:06:38.304466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.945 [2024-07-24 19:06:38.313735] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.945 [2024-07-24 19:06:38.314141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.945 [2024-07-24 19:06:38.314173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.945 [2024-07-24 19:06:38.314191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.945 [2024-07-24 19:06:38.314435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.945 [2024-07-24 19:06:38.314678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.945 [2024-07-24 19:06:38.314701] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.945 [2024-07-24 19:06:38.314716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.945 [2024-07-24 19:06:38.318297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.945 [2024-07-24 19:06:38.327568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.945 [2024-07-24 19:06:38.328044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.945 [2024-07-24 19:06:38.328144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.945 [2024-07-24 19:06:38.328164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.945 [2024-07-24 19:06:38.328402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.946 [2024-07-24 19:06:38.328645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.946 [2024-07-24 19:06:38.328669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.946 [2024-07-24 19:06:38.328683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.946 [2024-07-24 19:06:38.332258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.946 [2024-07-24 19:06:38.341533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.946 [2024-07-24 19:06:38.341973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.946 [2024-07-24 19:06:38.342004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.946 [2024-07-24 19:06:38.342021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.946 [2024-07-24 19:06:38.342271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.946 [2024-07-24 19:06:38.342515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.946 [2024-07-24 19:06:38.342539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.946 [2024-07-24 19:06:38.342554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.946 [2024-07-24 19:06:38.346129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.946 [2024-07-24 19:06:38.355407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.946 [2024-07-24 19:06:38.355816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.946 [2024-07-24 19:06:38.355847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.946 [2024-07-24 19:06:38.355865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.946 [2024-07-24 19:06:38.356114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.946 [2024-07-24 19:06:38.356358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.946 [2024-07-24 19:06:38.356381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.946 [2024-07-24 19:06:38.356402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.946 [2024-07-24 19:06:38.359972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.946 [2024-07-24 19:06:38.369252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.946 [2024-07-24 19:06:38.369657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.946 [2024-07-24 19:06:38.369689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.946 [2024-07-24 19:06:38.369706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.946 [2024-07-24 19:06:38.369944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.946 [2024-07-24 19:06:38.370197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.946 [2024-07-24 19:06:38.370221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.946 [2024-07-24 19:06:38.370236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.946 [2024-07-24 19:06:38.373802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.946 [2024-07-24 19:06:38.383076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.946 [2024-07-24 19:06:38.383515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.946 [2024-07-24 19:06:38.383546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.946 [2024-07-24 19:06:38.383563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.946 [2024-07-24 19:06:38.383800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.946 [2024-07-24 19:06:38.384043] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.946 [2024-07-24 19:06:38.384066] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.946 [2024-07-24 19:06:38.384081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.946 [2024-07-24 19:06:38.387660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.946 [2024-07-24 19:06:38.396929] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.946 [2024-07-24 19:06:38.397344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.946 [2024-07-24 19:06:38.397376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.946 [2024-07-24 19:06:38.397394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.946 [2024-07-24 19:06:38.397632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.946 [2024-07-24 19:06:38.397874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.946 [2024-07-24 19:06:38.397898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.946 [2024-07-24 19:06:38.397913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.946 [2024-07-24 19:06:38.401502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.946 [2024-07-24 19:06:38.410791] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.946 [2024-07-24 19:06:38.411210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.946 [2024-07-24 19:06:38.411241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.946 [2024-07-24 19:06:38.411259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.946 [2024-07-24 19:06:38.411497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.946 [2024-07-24 19:06:38.411739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.946 [2024-07-24 19:06:38.411762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.946 [2024-07-24 19:06:38.411777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.946 [2024-07-24 19:06:38.415358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.946 [2024-07-24 19:06:38.424840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.946 [2024-07-24 19:06:38.425233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.946 [2024-07-24 19:06:38.425265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.946 [2024-07-24 19:06:38.425283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.946 [2024-07-24 19:06:38.425521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.946 [2024-07-24 19:06:38.425763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.946 [2024-07-24 19:06:38.425787] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.946 [2024-07-24 19:06:38.425801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.946 [2024-07-24 19:06:38.429382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.946 [2024-07-24 19:06:38.438880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.946 [2024-07-24 19:06:38.439294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.946 [2024-07-24 19:06:38.439325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.946 [2024-07-24 19:06:38.439342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.946 [2024-07-24 19:06:38.439580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.946 [2024-07-24 19:06:38.439822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.946 [2024-07-24 19:06:38.439845] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.946 [2024-07-24 19:06:38.439860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.946 [2024-07-24 19:06:38.443450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.946 [2024-07-24 19:06:38.452734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.946 [2024-07-24 19:06:38.453169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.946 [2024-07-24 19:06:38.453201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.946 [2024-07-24 19:06:38.453218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.946 [2024-07-24 19:06:38.453465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.946 [2024-07-24 19:06:38.453708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.946 [2024-07-24 19:06:38.453731] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.946 [2024-07-24 19:06:38.453746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.946 [2024-07-24 19:06:38.457325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.946 [2024-07-24 19:06:38.466591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.946 [2024-07-24 19:06:38.467018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.946 [2024-07-24 19:06:38.467048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.946 [2024-07-24 19:06:38.467065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.946 [2024-07-24 19:06:38.467313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.947 [2024-07-24 19:06:38.467556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.947 [2024-07-24 19:06:38.467579] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.947 [2024-07-24 19:06:38.467594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.947 [2024-07-24 19:06:38.471169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.947 [2024-07-24 19:06:38.480440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.947 [2024-07-24 19:06:38.480868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.947 [2024-07-24 19:06:38.480898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.947 [2024-07-24 19:06:38.480916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.947 [2024-07-24 19:06:38.481166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.947 [2024-07-24 19:06:38.481409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.947 [2024-07-24 19:06:38.481432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.947 [2024-07-24 19:06:38.481447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.947 [2024-07-24 19:06:38.485016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.947 [2024-07-24 19:06:38.494293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.947 [2024-07-24 19:06:38.494701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.947 [2024-07-24 19:06:38.494732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.947 [2024-07-24 19:06:38.494749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.947 [2024-07-24 19:06:38.494986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.947 [2024-07-24 19:06:38.495241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.947 [2024-07-24 19:06:38.495265] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.947 [2024-07-24 19:06:38.495286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.947 [2024-07-24 19:06:38.498857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.947 [2024-07-24 19:06:38.508137] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.947 [2024-07-24 19:06:38.508551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.947 [2024-07-24 19:06:38.508582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.947 [2024-07-24 19:06:38.508600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.947 [2024-07-24 19:06:38.508838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.947 [2024-07-24 19:06:38.509080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.947 [2024-07-24 19:06:38.509116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.947 [2024-07-24 19:06:38.509134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.947 [2024-07-24 19:06:38.512707] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.947 [2024-07-24 19:06:38.521987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.947 [2024-07-24 19:06:38.522375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.947 [2024-07-24 19:06:38.522405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.947 [2024-07-24 19:06:38.522423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.947 [2024-07-24 19:06:38.522660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.947 [2024-07-24 19:06:38.522902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.947 [2024-07-24 19:06:38.522925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.947 [2024-07-24 19:06:38.522940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.947 [2024-07-24 19:06:38.526518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.947 [2024-07-24 19:06:38.535996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.947 [2024-07-24 19:06:38.536409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.947 [2024-07-24 19:06:38.536440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:00.947 [2024-07-24 19:06:38.536457] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:00.947 [2024-07-24 19:06:38.536694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:00.947 [2024-07-24 19:06:38.536936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.947 [2024-07-24 19:06:38.536959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.947 [2024-07-24 19:06:38.536974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.947 [2024-07-24 19:06:38.540562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.206 [2024-07-24 19:06:38.549832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.207 [2024-07-24 19:06:38.550274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.207 [2024-07-24 19:06:38.550310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.207 [2024-07-24 19:06:38.550329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.207 [2024-07-24 19:06:38.550567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.207 [2024-07-24 19:06:38.550809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.207 [2024-07-24 19:06:38.550832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.207 [2024-07-24 19:06:38.550847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.207 [2024-07-24 19:06:38.554441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.207 [2024-07-24 19:06:38.563710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.207 [2024-07-24 19:06:38.564139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.207 [2024-07-24 19:06:38.564170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.207 [2024-07-24 19:06:38.564188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.207 [2024-07-24 19:06:38.564426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.207 [2024-07-24 19:06:38.564667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.207 [2024-07-24 19:06:38.564690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.207 [2024-07-24 19:06:38.564705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.207 [2024-07-24 19:06:38.568286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.207 [2024-07-24 19:06:38.577550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.207 [2024-07-24 19:06:38.577991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.207 [2024-07-24 19:06:38.578022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.207 [2024-07-24 19:06:38.578039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.207 [2024-07-24 19:06:38.578287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.207 [2024-07-24 19:06:38.578530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.207 [2024-07-24 19:06:38.578553] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.207 [2024-07-24 19:06:38.578568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.207 [2024-07-24 19:06:38.582143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.207 [2024-07-24 19:06:38.591412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.207 [2024-07-24 19:06:38.591818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.207 [2024-07-24 19:06:38.591850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.207 [2024-07-24 19:06:38.591867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.207 [2024-07-24 19:06:38.592114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.207 [2024-07-24 19:06:38.592374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.207 [2024-07-24 19:06:38.592398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.207 [2024-07-24 19:06:38.592413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.207 [2024-07-24 19:06:38.595982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.207 [2024-07-24 19:06:38.605260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.207 [2024-07-24 19:06:38.605692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.207 [2024-07-24 19:06:38.605723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.207 [2024-07-24 19:06:38.605741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.207 [2024-07-24 19:06:38.605979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.207 [2024-07-24 19:06:38.606235] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.207 [2024-07-24 19:06:38.606259] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.207 [2024-07-24 19:06:38.606275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.207 [2024-07-24 19:06:38.609842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.207 [2024-07-24 19:06:38.619113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.207 [2024-07-24 19:06:38.619525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.207 [2024-07-24 19:06:38.619556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.207 [2024-07-24 19:06:38.619573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.207 [2024-07-24 19:06:38.619811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.207 [2024-07-24 19:06:38.620053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.207 [2024-07-24 19:06:38.620076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.207 [2024-07-24 19:06:38.620091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.207 [2024-07-24 19:06:38.623668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.207 [2024-07-24 19:06:38.632938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.207 [2024-07-24 19:06:38.633354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.207 [2024-07-24 19:06:38.633385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.207 [2024-07-24 19:06:38.633403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.207 [2024-07-24 19:06:38.633640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.207 [2024-07-24 19:06:38.633883] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.207 [2024-07-24 19:06:38.633905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.207 [2024-07-24 19:06:38.633920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.207 [2024-07-24 19:06:38.637504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.207 [2024-07-24 19:06:38.646779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.207 [2024-07-24 19:06:38.647182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.207 [2024-07-24 19:06:38.647213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.207 [2024-07-24 19:06:38.647230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.207 [2024-07-24 19:06:38.647468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.207 [2024-07-24 19:06:38.647711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.207 [2024-07-24 19:06:38.647734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.207 [2024-07-24 19:06:38.647748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.207 [2024-07-24 19:06:38.651326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.207 [2024-07-24 19:06:38.660810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.207 [2024-07-24 19:06:38.661238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.207 [2024-07-24 19:06:38.661269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.207 [2024-07-24 19:06:38.661286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.207 [2024-07-24 19:06:38.661523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.207 [2024-07-24 19:06:38.661765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.207 [2024-07-24 19:06:38.661788] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.207 [2024-07-24 19:06:38.661803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.207 [2024-07-24 19:06:38.665379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.207 [2024-07-24 19:06:38.674880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.207 [2024-07-24 19:06:38.675271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.207 [2024-07-24 19:06:38.675303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.207 [2024-07-24 19:06:38.675320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.207 [2024-07-24 19:06:38.675558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.207 [2024-07-24 19:06:38.675800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.207 [2024-07-24 19:06:38.675823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.208 [2024-07-24 19:06:38.675837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.208 [2024-07-24 19:06:38.679418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.208 [2024-07-24 19:06:38.688891] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.208 [2024-07-24 19:06:38.689331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.208 [2024-07-24 19:06:38.689362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.208 [2024-07-24 19:06:38.689386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.208 [2024-07-24 19:06:38.689624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.208 [2024-07-24 19:06:38.689866] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.208 [2024-07-24 19:06:38.689890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.208 [2024-07-24 19:06:38.689904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.208 [2024-07-24 19:06:38.693489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.208 [2024-07-24 19:06:38.702759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.208 [2024-07-24 19:06:38.703175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.208 [2024-07-24 19:06:38.703206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.208 [2024-07-24 19:06:38.703224] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.208 [2024-07-24 19:06:38.703461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.208 [2024-07-24 19:06:38.703704] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.208 [2024-07-24 19:06:38.703727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.208 [2024-07-24 19:06:38.703742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.208 [2024-07-24 19:06:38.707322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.208 [2024-07-24 19:06:38.716600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.208 [2024-07-24 19:06:38.717032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.208 [2024-07-24 19:06:38.717063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.208 [2024-07-24 19:06:38.717080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.208 [2024-07-24 19:06:38.717330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.208 [2024-07-24 19:06:38.717573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.208 [2024-07-24 19:06:38.717596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.208 [2024-07-24 19:06:38.717611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.208 [2024-07-24 19:06:38.721186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.208 [2024-07-24 19:06:38.730459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.208 [2024-07-24 19:06:38.730871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.208 [2024-07-24 19:06:38.730903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.208 [2024-07-24 19:06:38.730921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.208 [2024-07-24 19:06:38.731170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.208 [2024-07-24 19:06:38.731414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.208 [2024-07-24 19:06:38.731447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.208 [2024-07-24 19:06:38.731463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.208 [2024-07-24 19:06:38.735032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.208 [2024-07-24 19:06:38.744325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.208 [2024-07-24 19:06:38.744849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.208 [2024-07-24 19:06:38.744880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.208 [2024-07-24 19:06:38.744898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.208 [2024-07-24 19:06:38.745145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.208 [2024-07-24 19:06:38.745391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.208 [2024-07-24 19:06:38.745414] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.208 [2024-07-24 19:06:38.745430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.208 [2024-07-24 19:06:38.749004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.208 [2024-07-24 19:06:38.758344] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.208 [2024-07-24 19:06:38.758773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.208 [2024-07-24 19:06:38.758805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.208 [2024-07-24 19:06:38.758823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.208 [2024-07-24 19:06:38.759061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.208 [2024-07-24 19:06:38.759315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.208 [2024-07-24 19:06:38.759338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.208 [2024-07-24 19:06:38.759353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.208 [2024-07-24 19:06:38.762925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.208 [2024-07-24 19:06:38.772235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.208 [2024-07-24 19:06:38.772757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.208 [2024-07-24 19:06:38.772809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.208 [2024-07-24 19:06:38.772826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.208 [2024-07-24 19:06:38.773064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.208 [2024-07-24 19:06:38.773317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.208 [2024-07-24 19:06:38.773341] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.208 [2024-07-24 19:06:38.773356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.208 [2024-07-24 19:06:38.776930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.208 [2024-07-24 19:06:38.786239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.208 [2024-07-24 19:06:38.786679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.208 [2024-07-24 19:06:38.786709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.208 [2024-07-24 19:06:38.786726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.208 [2024-07-24 19:06:38.786965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.208 [2024-07-24 19:06:38.787231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.208 [2024-07-24 19:06:38.787255] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.208 [2024-07-24 19:06:38.787270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.208 [2024-07-24 19:06:38.790834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.208 [2024-07-24 19:06:38.800123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.208 [2024-07-24 19:06:38.800646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.208 [2024-07-24 19:06:38.800678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.208 [2024-07-24 19:06:38.800695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.208 [2024-07-24 19:06:38.800932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.208 [2024-07-24 19:06:38.801186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.208 [2024-07-24 19:06:38.801210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.208 [2024-07-24 19:06:38.801225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.208 [2024-07-24 19:06:38.804802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.468 [2024-07-24 19:06:38.814067] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.468 [2024-07-24 19:06:38.814518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.468 [2024-07-24 19:06:38.814549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.468 [2024-07-24 19:06:38.814566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.468 [2024-07-24 19:06:38.814804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.468 [2024-07-24 19:06:38.815047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.468 [2024-07-24 19:06:38.815070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.468 [2024-07-24 19:06:38.815085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.468 [2024-07-24 19:06:38.818663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.468 [2024-07-24 19:06:38.827950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.468 [2024-07-24 19:06:38.828365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.468 [2024-07-24 19:06:38.828396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.468 [2024-07-24 19:06:38.828413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.468 [2024-07-24 19:06:38.828657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.468 [2024-07-24 19:06:38.828900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.468 [2024-07-24 19:06:38.828923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.468 [2024-07-24 19:06:38.828938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.468 [2024-07-24 19:06:38.832513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.468 [2024-07-24 19:06:38.841797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.468 [2024-07-24 19:06:38.842194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.468 [2024-07-24 19:06:38.842225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.468 [2024-07-24 19:06:38.842243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.468 [2024-07-24 19:06:38.842481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.468 [2024-07-24 19:06:38.842723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.468 [2024-07-24 19:06:38.842746] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.468 [2024-07-24 19:06:38.842762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.468 [2024-07-24 19:06:38.846339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.468 [2024-07-24 19:06:38.855838] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.468 [2024-07-24 19:06:38.856273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.468 [2024-07-24 19:06:38.856304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.468 [2024-07-24 19:06:38.856321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.468 [2024-07-24 19:06:38.856559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.468 [2024-07-24 19:06:38.856801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.468 [2024-07-24 19:06:38.856824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.468 [2024-07-24 19:06:38.856839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.468 [2024-07-24 19:06:38.860426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.468 [2024-07-24 19:06:38.869689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.468 [2024-07-24 19:06:38.870119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.468 [2024-07-24 19:06:38.870151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.468 [2024-07-24 19:06:38.870168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.468 [2024-07-24 19:06:38.870406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.468 [2024-07-24 19:06:38.870649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.468 [2024-07-24 19:06:38.870672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.468 [2024-07-24 19:06:38.870693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.468 [2024-07-24 19:06:38.874267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.468 [2024-07-24 19:06:38.883528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.468 [2024-07-24 19:06:38.883910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.468 [2024-07-24 19:06:38.883941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.468 [2024-07-24 19:06:38.883958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.468 [2024-07-24 19:06:38.884206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.468 [2024-07-24 19:06:38.884448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.468 [2024-07-24 19:06:38.884472] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.468 [2024-07-24 19:06:38.884487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.468 [2024-07-24 19:06:38.888050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.468 [2024-07-24 19:06:38.897531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.468 [2024-07-24 19:06:38.897966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.468 [2024-07-24 19:06:38.897997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.468 [2024-07-24 19:06:38.898014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.468 [2024-07-24 19:06:38.898261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.468 [2024-07-24 19:06:38.898503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.468 [2024-07-24 19:06:38.898527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.468 [2024-07-24 19:06:38.898541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.468 [2024-07-24 19:06:38.902123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.468 [2024-07-24 19:06:38.911386] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.468 [2024-07-24 19:06:38.911814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.468 [2024-07-24 19:06:38.911844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.468 [2024-07-24 19:06:38.911861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.468 [2024-07-24 19:06:38.912098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.468 [2024-07-24 19:06:38.912355] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.468 [2024-07-24 19:06:38.912378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.468 [2024-07-24 19:06:38.912393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.468 [2024-07-24 19:06:38.915961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.468 [2024-07-24 19:06:38.925268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.468 [2024-07-24 19:06:38.925687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.468 [2024-07-24 19:06:38.925718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.468 [2024-07-24 19:06:38.925736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.469 [2024-07-24 19:06:38.925974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.469 [2024-07-24 19:06:38.926229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.469 [2024-07-24 19:06:38.926253] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.469 [2024-07-24 19:06:38.926268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.469 [2024-07-24 19:06:38.929840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.469 [2024-07-24 19:06:38.939121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.469 [2024-07-24 19:06:38.939551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.469 [2024-07-24 19:06:38.939582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.469 [2024-07-24 19:06:38.939599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.469 [2024-07-24 19:06:38.939837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.469 [2024-07-24 19:06:38.940079] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.469 [2024-07-24 19:06:38.940110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.469 [2024-07-24 19:06:38.940127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.469 [2024-07-24 19:06:38.943710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.469 [2024-07-24 19:06:38.952991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.469 [2024-07-24 19:06:38.953415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.469 [2024-07-24 19:06:38.953447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.469 [2024-07-24 19:06:38.953464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.469 [2024-07-24 19:06:38.953702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.469 [2024-07-24 19:06:38.953945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.469 [2024-07-24 19:06:38.953969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.469 [2024-07-24 19:06:38.953984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.469 [2024-07-24 19:06:38.957556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.469 [2024-07-24 19:06:38.967028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.469 [2024-07-24 19:06:38.967445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.469 [2024-07-24 19:06:38.967477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.469 [2024-07-24 19:06:38.967493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.469 [2024-07-24 19:06:38.967737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.469 [2024-07-24 19:06:38.967980] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.469 [2024-07-24 19:06:38.968003] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.469 [2024-07-24 19:06:38.968018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.469 [2024-07-24 19:06:38.971592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.469 [2024-07-24 19:06:38.980864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.469 [2024-07-24 19:06:38.981297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.469 [2024-07-24 19:06:38.981328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.469 [2024-07-24 19:06:38.981345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.469 [2024-07-24 19:06:38.981582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.469 [2024-07-24 19:06:38.981824] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.469 [2024-07-24 19:06:38.981847] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.469 [2024-07-24 19:06:38.981861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.469 [2024-07-24 19:06:38.985456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.469 [2024-07-24 19:06:38.994730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.469 [2024-07-24 19:06:38.995147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.469 [2024-07-24 19:06:38.995185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.469 [2024-07-24 19:06:38.995203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.469 [2024-07-24 19:06:38.995441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.469 [2024-07-24 19:06:38.995683] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.469 [2024-07-24 19:06:38.995706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.469 [2024-07-24 19:06:38.995721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.469 [2024-07-24 19:06:38.999295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.469 [2024-07-24 19:06:39.008773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.469 [2024-07-24 19:06:39.009176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.469 [2024-07-24 19:06:39.009208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.469 [2024-07-24 19:06:39.009225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.469 [2024-07-24 19:06:39.009463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.469 [2024-07-24 19:06:39.009705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.469 [2024-07-24 19:06:39.009728] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.469 [2024-07-24 19:06:39.009748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.469 [2024-07-24 19:06:39.013326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.469 [2024-07-24 19:06:39.022795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.469 [2024-07-24 19:06:39.023230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.469 [2024-07-24 19:06:39.023262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.469 [2024-07-24 19:06:39.023279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.469 [2024-07-24 19:06:39.023517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.469 [2024-07-24 19:06:39.023759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.469 [2024-07-24 19:06:39.023782] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.469 [2024-07-24 19:06:39.023797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.469 [2024-07-24 19:06:39.027372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.469 [2024-07-24 19:06:39.036632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.469 [2024-07-24 19:06:39.037063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.469 [2024-07-24 19:06:39.037094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.469 [2024-07-24 19:06:39.037122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.469 [2024-07-24 19:06:39.037361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.469 [2024-07-24 19:06:39.037603] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.469 [2024-07-24 19:06:39.037626] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.469 [2024-07-24 19:06:39.037641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.469 [2024-07-24 19:06:39.041222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.469 [2024-07-24 19:06:39.050481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.469 [2024-07-24 19:06:39.050913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.469 [2024-07-24 19:06:39.050944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.469 [2024-07-24 19:06:39.050962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.469 [2024-07-24 19:06:39.051211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.469 [2024-07-24 19:06:39.051454] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.469 [2024-07-24 19:06:39.051478] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.469 [2024-07-24 19:06:39.051492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.469 [2024-07-24 19:06:39.055069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.469 [2024-07-24 19:06:39.064337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.469 [2024-07-24 19:06:39.064746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.469 [2024-07-24 19:06:39.064782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.469 [2024-07-24 19:06:39.064801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.469 [2024-07-24 19:06:39.065038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.469 [2024-07-24 19:06:39.065293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.470 [2024-07-24 19:06:39.065317] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.470 [2024-07-24 19:06:39.065332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.729 [2024-07-24 19:06:39.068896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.729 [2024-07-24 19:06:39.078368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.729 [2024-07-24 19:06:39.078806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.729 [2024-07-24 19:06:39.078837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.729 [2024-07-24 19:06:39.078854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.729 [2024-07-24 19:06:39.079091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.729 [2024-07-24 19:06:39.079344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.729 [2024-07-24 19:06:39.079368] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.729 [2024-07-24 19:06:39.079383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.729 [2024-07-24 19:06:39.082946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.729 [2024-07-24 19:06:39.092212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.729 [2024-07-24 19:06:39.092631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.729 [2024-07-24 19:06:39.092662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.729 [2024-07-24 19:06:39.092679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.729 [2024-07-24 19:06:39.092916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.729 [2024-07-24 19:06:39.093173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.729 [2024-07-24 19:06:39.093197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.729 [2024-07-24 19:06:39.093212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.729 [2024-07-24 19:06:39.096792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.729 [2024-07-24 19:06:39.106051] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.729 [2024-07-24 19:06:39.106478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.729 [2024-07-24 19:06:39.106510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.729 [2024-07-24 19:06:39.106528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.729 [2024-07-24 19:06:39.106766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.729 [2024-07-24 19:06:39.107014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.729 [2024-07-24 19:06:39.107037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.729 [2024-07-24 19:06:39.107052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.729 [2024-07-24 19:06:39.110629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.729 [2024-07-24 19:06:39.119889] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.729 [2024-07-24 19:06:39.120327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.729 [2024-07-24 19:06:39.120358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.729 [2024-07-24 19:06:39.120375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.729 [2024-07-24 19:06:39.120613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.729 [2024-07-24 19:06:39.120856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.729 [2024-07-24 19:06:39.120879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.729 [2024-07-24 19:06:39.120893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.729 [2024-07-24 19:06:39.124467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.729 [2024-07-24 19:06:39.133729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.729 [2024-07-24 19:06:39.134154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.729 [2024-07-24 19:06:39.134185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.729 [2024-07-24 19:06:39.134203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.729 [2024-07-24 19:06:39.134440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.729 [2024-07-24 19:06:39.134682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.729 [2024-07-24 19:06:39.134706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.729 [2024-07-24 19:06:39.134721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.729 [2024-07-24 19:06:39.138293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.729 [2024-07-24 19:06:39.147565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.729 [2024-07-24 19:06:39.147972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.729 [2024-07-24 19:06:39.148003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.729 [2024-07-24 19:06:39.148020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.729 [2024-07-24 19:06:39.148270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.729 [2024-07-24 19:06:39.148513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.729 [2024-07-24 19:06:39.148536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.730 [2024-07-24 19:06:39.148551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.730 [2024-07-24 19:06:39.152130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.730 [2024-07-24 19:06:39.161407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.730 [2024-07-24 19:06:39.161839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.730 [2024-07-24 19:06:39.161870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.730 [2024-07-24 19:06:39.161887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.730 [2024-07-24 19:06:39.162136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.730 [2024-07-24 19:06:39.162379] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.730 [2024-07-24 19:06:39.162402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.730 [2024-07-24 19:06:39.162417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.730 [2024-07-24 19:06:39.165981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.730 [2024-07-24 19:06:39.175243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.730 [2024-07-24 19:06:39.175656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.730 [2024-07-24 19:06:39.175687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.730 [2024-07-24 19:06:39.175705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.730 [2024-07-24 19:06:39.175942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.730 [2024-07-24 19:06:39.176196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.730 [2024-07-24 19:06:39.176219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.730 [2024-07-24 19:06:39.176234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.730 [2024-07-24 19:06:39.179799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.730 [2024-07-24 19:06:39.189071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.730 [2024-07-24 19:06:39.189484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.730 [2024-07-24 19:06:39.189515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.730 [2024-07-24 19:06:39.189533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.730 [2024-07-24 19:06:39.189770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.730 [2024-07-24 19:06:39.190012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.730 [2024-07-24 19:06:39.190036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.730 [2024-07-24 19:06:39.190051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.730 [2024-07-24 19:06:39.193631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.730 [2024-07-24 19:06:39.203107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.730 [2024-07-24 19:06:39.203511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.730 [2024-07-24 19:06:39.203541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.730 [2024-07-24 19:06:39.203564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.730 [2024-07-24 19:06:39.203803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.730 [2024-07-24 19:06:39.204045] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.730 [2024-07-24 19:06:39.204069] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.730 [2024-07-24 19:06:39.204084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.730 [2024-07-24 19:06:39.207660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:01.730 [2024-07-24 19:06:39.217134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.730 [2024-07-24 19:06:39.217552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.730 [2024-07-24 19:06:39.217583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:01.730 [2024-07-24 19:06:39.217601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:01.730 [2024-07-24 19:06:39.217838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:01.730 [2024-07-24 19:06:39.218081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:01.730 [2024-07-24 19:06:39.218113] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:01.730 [2024-07-24 19:06:39.218130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.730 [2024-07-24 19:06:39.221695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:01.730 [2024-07-24 19:06:39.231171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.730 [2024-07-24 19:06:39.231551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.730 [2024-07-24 19:06:39.231581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:01.730 [2024-07-24 19:06:39.231599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:01.730 [2024-07-24 19:06:39.231836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:01.730 [2024-07-24 19:06:39.232078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.730 [2024-07-24 19:06:39.232110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.730 [2024-07-24 19:06:39.232128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.730 [2024-07-24 19:06:39.235696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.730 [2024-07-24 19:06:39.245185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.730 [2024-07-24 19:06:39.245568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.730 [2024-07-24 19:06:39.245599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:01.730 [2024-07-24 19:06:39.245616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:01.730 [2024-07-24 19:06:39.245854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:01.730 [2024-07-24 19:06:39.246096] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.730 [2024-07-24 19:06:39.246135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.730 [2024-07-24 19:06:39.246150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.730 [2024-07-24 19:06:39.249733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.730 [2024-07-24 19:06:39.259226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.730 [2024-07-24 19:06:39.259635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.730 [2024-07-24 19:06:39.259666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:01.730 [2024-07-24 19:06:39.259683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:01.730 [2024-07-24 19:06:39.259920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:01.730 [2024-07-24 19:06:39.260175] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.730 [2024-07-24 19:06:39.260200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.730 [2024-07-24 19:06:39.260214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.730 [2024-07-24 19:06:39.263778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.730 [2024-07-24 19:06:39.273255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.730 [2024-07-24 19:06:39.273664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.730 [2024-07-24 19:06:39.273694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:01.730 [2024-07-24 19:06:39.273711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:01.730 [2024-07-24 19:06:39.273948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:01.730 [2024-07-24 19:06:39.274201] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.730 [2024-07-24 19:06:39.274225] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.730 [2024-07-24 19:06:39.274239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.730 [2024-07-24 19:06:39.277806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.730 [2024-07-24 19:06:39.287284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.730 [2024-07-24 19:06:39.287719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.730 [2024-07-24 19:06:39.287750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:01.731 [2024-07-24 19:06:39.287767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:01.731 [2024-07-24 19:06:39.288004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:01.731 [2024-07-24 19:06:39.288257] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.731 [2024-07-24 19:06:39.288281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.731 [2024-07-24 19:06:39.288295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.731 [2024-07-24 19:06:39.291859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.731 [2024-07-24 19:06:39.301135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.731 [2024-07-24 19:06:39.301570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.731 [2024-07-24 19:06:39.301601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:01.731 [2024-07-24 19:06:39.301618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:01.731 [2024-07-24 19:06:39.301856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:01.731 [2024-07-24 19:06:39.302098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.731 [2024-07-24 19:06:39.302132] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.731 [2024-07-24 19:06:39.302147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.731 [2024-07-24 19:06:39.305712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.731 [2024-07-24 19:06:39.314968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.731 [2024-07-24 19:06:39.315385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.731 [2024-07-24 19:06:39.315416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:01.731 [2024-07-24 19:06:39.315433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:01.731 [2024-07-24 19:06:39.315671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:01.731 [2024-07-24 19:06:39.315912] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.731 [2024-07-24 19:06:39.315936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.731 [2024-07-24 19:06:39.315951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.731 [2024-07-24 19:06:39.319526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.731 [2024-07-24 19:06:39.328996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.731 [2024-07-24 19:06:39.329432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.731 [2024-07-24 19:06:39.329463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:01.731 [2024-07-24 19:06:39.329480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:01.731 [2024-07-24 19:06:39.329717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:01.991 [2024-07-24 19:06:39.329959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.991 [2024-07-24 19:06:39.329982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.991 [2024-07-24 19:06:39.329998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.991 [2024-07-24 19:06:39.333572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.991 [2024-07-24 19:06:39.342843] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.991 [2024-07-24 19:06:39.343259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.991 [2024-07-24 19:06:39.343289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:01.991 [2024-07-24 19:06:39.343307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:01.991 [2024-07-24 19:06:39.343552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:01.991 [2024-07-24 19:06:39.343794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.991 [2024-07-24 19:06:39.343818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.991 [2024-07-24 19:06:39.343832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.991 [2024-07-24 19:06:39.347407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3248301 Killed "${NVMF_APP[@]}" "$@"
00:24:01.991 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:24:01.991 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:24:01.991 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:24:01.991 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:24:01.991 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:01.991 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3249254
00:24:01.991 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:24:01.991 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3249254
00:24:01.991 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3249254 ']'
00:24:01.991 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:01.991 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:01.991 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
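On Linux, errno 111 is ECONNREFUSED: the old target (pid 3248301) has just been killed and its replacement (pid 3249254) is still starting inside the cvl_0_0_ns_spdk namespace, so nothing is listening on 10.0.0.2:4420 yet and every reconnect attempt around these lines fails the same way. A minimal sketch of probing that listener from the target's namespace while waiting (assumes nc(1) is installed; not part of the test):

    # poll the NVMe/TCP listener the initiator keeps retrying against
    sudo ip netns exec cvl_0_0_ns_spdk sh -c '
        until nc -z -w1 10.0.0.2 4420; do   # -z: connect-only probe, -w1: 1s timeout
            echo "10.0.0.2:4420 still refusing connections (errno 111)"; sleep 1
        done
        echo "listener is up"'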
00:24:01.991 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:01.991 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:01.991 [2024-07-24 19:06:39.356900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.991 [2024-07-24 19:06:39.357323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.991 [2024-07-24 19:06:39.357354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:01.991 [2024-07-24 19:06:39.357372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:01.991 [2024-07-24 19:06:39.357609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:01.991 [2024-07-24 19:06:39.357852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.991 [2024-07-24 19:06:39.357875] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.991 [2024-07-24 19:06:39.357890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.991 [2024-07-24 19:06:39.361466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.991 [2024-07-24 19:06:39.370940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.991 [2024-07-24 19:06:39.371361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.991 [2024-07-24 19:06:39.371392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:01.991 [2024-07-24 19:06:39.371415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:01.991 [2024-07-24 19:06:39.371653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:01.991 [2024-07-24 19:06:39.371896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.991 [2024-07-24 19:06:39.371919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.991 [2024-07-24 19:06:39.371933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.991 [2024-07-24 19:06:39.375509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.991 [2024-07-24 19:06:39.384974] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.991 [2024-07-24 19:06:39.385386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.991 [2024-07-24 19:06:39.385418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:01.992 [2024-07-24 19:06:39.385436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:01.992 [2024-07-24 19:06:39.385674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:01.992 [2024-07-24 19:06:39.385916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.992 [2024-07-24 19:06:39.385939] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.992 [2024-07-24 19:06:39.385954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.992 [2024-07-24 19:06:39.389531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.992 [2024-07-24 19:06:39.399003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.992 [2024-07-24 19:06:39.399440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.992 [2024-07-24 19:06:39.399470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:01.992 [2024-07-24 19:06:39.399487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:01.992 [2024-07-24 19:06:39.399725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:01.992 [2024-07-24 19:06:39.399967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.992 [2024-07-24 19:06:39.399990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.992 [2024-07-24 19:06:39.400005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.992 [2024-07-24 19:06:39.403576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.992 [2024-07-24 19:06:39.403880] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization...
00:24:01.992 [2024-07-24 19:06:39.403959] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:01.992 [2024-07-24 19:06:39.412831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.992 [2024-07-24 19:06:39.413229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.992 [2024-07-24 19:06:39.413260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:01.992 [2024-07-24 19:06:39.413277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:01.992 [2024-07-24 19:06:39.413521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:01.992 [2024-07-24 19:06:39.413764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.992 [2024-07-24 19:06:39.413787] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.992 [2024-07-24 19:06:39.413802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.992 [2024-07-24 19:06:39.417379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.992 [2024-07-24 19:06:39.426834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.992 [2024-07-24 19:06:39.427288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.992 [2024-07-24 19:06:39.427319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:01.992 [2024-07-24 19:06:39.427336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:01.992 [2024-07-24 19:06:39.427575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:01.992 [2024-07-24 19:06:39.427817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.992 [2024-07-24 19:06:39.427840] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.992 [2024-07-24 19:06:39.427855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.992 [2024-07-24 19:06:39.431428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.992 EAL: No free 2048 kB hugepages reported on node 1
00:24:01.992 [2024-07-24 19:06:39.440695] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.992 [2024-07-24 19:06:39.441113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.992 [2024-07-24 19:06:39.441145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:01.992 [2024-07-24 19:06:39.441172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:01.992 [2024-07-24 19:06:39.441414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:01.992 [2024-07-24 19:06:39.441656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.992 [2024-07-24 19:06:39.441679] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.992 [2024-07-24 19:06:39.441694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.992 [2024-07-24 19:06:39.445270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.992 [2024-07-24 19:06:39.454028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.992 [2024-07-24 19:06:39.454515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.992 [2024-07-24 19:06:39.454557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:01.992 [2024-07-24 19:06:39.454573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:01.992 [2024-07-24 19:06:39.454812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:01.992 [2024-07-24 19:06:39.455026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.992 [2024-07-24 19:06:39.455046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.992 [2024-07-24 19:06:39.455064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.992 [2024-07-24 19:06:39.458107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.992 [2024-07-24 19:06:39.467431] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.992 [2024-07-24 19:06:39.467904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.992 [2024-07-24 19:06:39.467931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:01.992 [2024-07-24 19:06:39.467947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:01.992 [2024-07-24 19:06:39.468170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:01.992 [2024-07-24 19:06:39.468402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.992 [2024-07-24 19:06:39.468422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.992 [2024-07-24 19:06:39.468434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.992 [2024-07-24 19:06:39.470777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:24:01.992 [2024-07-24 19:06:39.471615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.992 [2024-07-24 19:06:39.480736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.992 [2024-07-24 19:06:39.481342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.992 [2024-07-24 19:06:39.481381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:01.992 [2024-07-24 19:06:39.481399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:01.992 [2024-07-24 19:06:39.481643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:01.992 [2024-07-24 19:06:39.481852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.992 [2024-07-24 19:06:39.481872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.992 [2024-07-24 19:06:39.481887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.992 [2024-07-24 19:06:39.484928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.992 [2024-07-24 19:06:39.494158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.992 [2024-07-24 19:06:39.494595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.992 [2024-07-24 19:06:39.494626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:01.992 [2024-07-24 19:06:39.494643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:01.992 [2024-07-24 19:06:39.494888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:01.992 [2024-07-24 19:06:39.495093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.992 [2024-07-24 19:06:39.495133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.992 [2024-07-24 19:06:39.495147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.992 [2024-07-24 19:06:39.498190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.992 [2024-07-24 19:06:39.507560] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.992 [2024-07-24 19:06:39.507982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.992 [2024-07-24 19:06:39.508010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:01.992 [2024-07-24 19:06:39.508026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:01.992 [2024-07-24 19:06:39.508265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:01.992 [2024-07-24 19:06:39.508490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.992 [2024-07-24 19:06:39.508510] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.992 [2024-07-24 19:06:39.508524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.992 [2024-07-24 19:06:39.511585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.993 [2024-07-24 19:06:39.520964] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.993 [2024-07-24 19:06:39.521369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.993 [2024-07-24 19:06:39.521413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:01.993 [2024-07-24 19:06:39.521429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:01.993 [2024-07-24 19:06:39.521687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:01.993 [2024-07-24 19:06:39.521892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.993 [2024-07-24 19:06:39.521912] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.993 [2024-07-24 19:06:39.521925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.993 [2024-07-24 19:06:39.524997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.993 [2024-07-24 19:06:39.534452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.993 [2024-07-24 19:06:39.534989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.993 [2024-07-24 19:06:39.535027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:01.993 [2024-07-24 19:06:39.535046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:01.993 [2024-07-24 19:06:39.535309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:01.993 [2024-07-24 19:06:39.535527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.993 [2024-07-24 19:06:39.535547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.993 [2024-07-24 19:06:39.535562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.993 [2024-07-24 19:06:39.538630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.993 [2024-07-24 19:06:39.547820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.993 [2024-07-24 19:06:39.548270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.993 [2024-07-24 19:06:39.548300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:01.993 [2024-07-24 19:06:39.548317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:01.993 [2024-07-24 19:06:39.548571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:01.993 [2024-07-24 19:06:39.548778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.993 [2024-07-24 19:06:39.548798] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.993 [2024-07-24 19:06:39.548811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.993 [2024-07-24 19:06:39.551859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.993 [2024-07-24 19:06:39.561270] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.993 [2024-07-24 19:06:39.561672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.993 [2024-07-24 19:06:39.561700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:01.993 [2024-07-24 19:06:39.561716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:01.993 [2024-07-24 19:06:39.561959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:01.993 [2024-07-24 19:06:39.562175] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.993 [2024-07-24 19:06:39.562196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.993 [2024-07-24 19:06:39.562208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.993 [2024-07-24 19:06:39.565253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.993 [2024-07-24 19:06:39.574671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.993 [2024-07-24 19:06:39.575083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.993 [2024-07-24 19:06:39.575117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:01.993 [2024-07-24 19:06:39.575134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:01.993 [2024-07-24 19:06:39.575362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:01.993 [2024-07-24 19:06:39.575583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.993 [2024-07-24 19:06:39.575602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.993 [2024-07-24 19:06:39.575615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.993 [2024-07-24 19:06:39.578447] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:01.993 [2024-07-24 19:06:39.578480] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:01.993 [2024-07-24 19:06:39.578494] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:01.993 [2024-07-24 19:06:39.578505] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:01.993 [2024-07-24 19:06:39.578515] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:01.993 [2024-07-24 19:06:39.578570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:24:01.993 [2024-07-24 19:06:39.578682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:01.993 [2024-07-24 19:06:39.578630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:24:01.993 [2024-07-24 19:06:39.578633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:01.993 [2024-07-24 19:06:39.588097] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.993 [2024-07-24 19:06:39.588615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.993 [2024-07-24 19:06:39.588653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:01.993 [2024-07-24 19:06:39.588672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:01.993 [2024-07-24 19:06:39.588909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:01.993 [2024-07-24 19:06:39.589133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:01.993 [2024-07-24 19:06:39.589154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:01.993 [2024-07-24 19:06:39.589184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:02.253 [2024-07-24 19:06:39.592399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:02.253 [2024-07-24 19:06:39.601565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:02.253 [2024-07-24 19:06:39.602055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.253 [2024-07-24 19:06:39.602093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:02.253 [2024-07-24 19:06:39.602120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:02.253 [2024-07-24 19:06:39.602359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:02.253 [2024-07-24 19:06:39.602576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:02.253 [2024-07-24 19:06:39.602597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:02.253 [2024-07-24 19:06:39.602612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:02.253 [2024-07-24 19:06:39.605720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:02.253 [2024-07-24 19:06:39.615091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:02.253 [2024-07-24 19:06:39.615632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.253 [2024-07-24 19:06:39.615671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:02.253 [2024-07-24 19:06:39.615690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:02.253 [2024-07-24 19:06:39.615927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:02.253 [2024-07-24 19:06:39.616151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:02.253 [2024-07-24 19:06:39.616173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:02.253 [2024-07-24 19:06:39.616188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:02.253 [2024-07-24 19:06:39.619314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
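The app_setup_trace notices a few lines up give the exact capture commands for this target instance (shm id 0). A sketch of the two options they describe, assuming spdk_trace is on PATH and the default shm path printed by the log:

    # parse a live snapshot of the nvmf tracepoints for instance 0
    spdk_trace -s nvmf -i 0 > nvmf_trace.txt
    # or keep the raw shared-memory buffer for offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0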
00:24:02.253 [2024-07-24 19:06:39.628565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:02.253 [2024-07-24 19:06:39.629118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.253 [2024-07-24 19:06:39.629156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:02.253 [2024-07-24 19:06:39.629175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:02.253 [2024-07-24 19:06:39.629408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:02.253 [2024-07-24 19:06:39.629631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:02.253 [2024-07-24 19:06:39.629653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:02.253 [2024-07-24 19:06:39.629669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:02.254 [2024-07-24 19:06:39.632884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:02.254 [2024-07-24 19:06:39.642255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:02.254 [2024-07-24 19:06:39.642727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.254 [2024-07-24 19:06:39.642762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:02.254 [2024-07-24 19:06:39.642780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:02.254 [2024-07-24 19:06:39.643017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:02.254 [2024-07-24 19:06:39.643265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:02.254 [2024-07-24 19:06:39.643288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:02.254 [2024-07-24 19:06:39.643303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:02.254 [2024-07-24 19:06:39.646551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:02.254 [2024-07-24 19:06:39.655857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:02.254 [2024-07-24 19:06:39.656400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.254 [2024-07-24 19:06:39.656440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:02.254 [2024-07-24 19:06:39.656460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:02.254 [2024-07-24 19:06:39.656698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:02.254 [2024-07-24 19:06:39.656915] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:02.254 [2024-07-24 19:06:39.656936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:02.254 [2024-07-24 19:06:39.656951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:02.254 [2024-07-24 19:06:39.660058] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:02.254 [2024-07-24 19:06:39.669426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:02.254 [2024-07-24 19:06:39.669856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.254 [2024-07-24 19:06:39.669888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:02.254 [2024-07-24 19:06:39.669904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:02.254 [2024-07-24 19:06:39.670145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:02.254 [2024-07-24 19:06:39.670359] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:02.254 [2024-07-24 19:06:39.670379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:02.254 [2024-07-24 19:06:39.670402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:02.254 [2024-07-24 19:06:39.673505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:02.254 [2024-07-24 19:06:39.682880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:02.254 [2024-07-24 19:06:39.683270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.254 [2024-07-24 19:06:39.683298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:02.254 [2024-07-24 19:06:39.683314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:02.254 [2024-07-24 19:06:39.683528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:02.254 [2024-07-24 19:06:39.683747] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:02.254 [2024-07-24 19:06:39.683768] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:02.254 [2024-07-24 19:06:39.683781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:02.254 [2024-07-24 19:06:39.686989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:02.254 [2024-07-24 19:06:39.696468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:02.254 [2024-07-24 19:06:39.696821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.254 [2024-07-24 19:06:39.696849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:02.254 [2024-07-24 19:06:39.696864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:02.254 [2024-07-24 19:06:39.697079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:02.254 [2024-07-24 19:06:39.697390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:02.254 [2024-07-24 19:06:39.697413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:02.254 [2024-07-24 19:06:39.697427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:02.254 [2024-07-24 19:06:39.700672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:02.254 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:24:02.254 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0
00:24:02.254 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:24:02.254 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:24:02.254 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:02.254 [2024-07-24 19:06:39.710021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:02.254 [2024-07-24 19:06:39.710454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.254 [2024-07-24 19:06:39.710484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:02.254 [2024-07-24 19:06:39.710500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:02.254 [2024-07-24 19:06:39.710714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:02.254 [2024-07-24 19:06:39.710942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:02.254 [2024-07-24 19:06:39.710964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:02.254 [2024-07-24 19:06:39.710984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:02.254 [2024-07-24 19:06:39.714241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:02.254 [2024-07-24 19:06:39.723559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:02.254 [2024-07-24 19:06:39.723939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.254 [2024-07-24 19:06:39.723967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:02.254 [2024-07-24 19:06:39.723983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:02.254 [2024-07-24 19:06:39.724220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:02.254 [2024-07-24 19:06:39.724434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:02.254 [2024-07-24 19:06:39.724454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:02.254 [2024-07-24 19:06:39.724467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:02.254 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:02.254 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:24:02.254 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:02.254 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:02.254 [2024-07-24 19:06:39.727572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:02.254 [2024-07-24 19:06:39.730675] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:02.254 [2024-07-24 19:06:39.736961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:02.254 [2024-07-24 19:06:39.737362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.254 [2024-07-24 19:06:39.737390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:02.254 [2024-07-24 19:06:39.737406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:02.254 [2024-07-24 19:06:39.737647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:02.254 [2024-07-24 19:06:39.737853] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:02.254 [2024-07-24 19:06:39.737873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:02.254 [2024-07-24 19:06:39.737885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:02.254 [2024-07-24 19:06:39.741007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:02.254 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:02.254 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:24:02.254 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:02.254 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:02.254 [2024-07-24 19:06:39.750581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:02.254 [2024-07-24 19:06:39.750980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.254 [2024-07-24 19:06:39.751006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:02.254 [2024-07-24 19:06:39.751029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:02.254 [2024-07-24 19:06:39.751252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:02.254 [2024-07-24 19:06:39.751483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:02.254 [2024-07-24 19:06:39.751503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:02.255 [2024-07-24 19:06:39.751515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:02.255 [2024-07-24 19:06:39.754764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:02.255 [2024-07-24 19:06:39.764169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:02.255 [2024-07-24 19:06:39.764716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.255 [2024-07-24 19:06:39.764754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420
00:24:02.255 [2024-07-24 19:06:39.764773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set
00:24:02.255 [2024-07-24 19:06:39.764995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor
00:24:02.255 [2024-07-24 19:06:39.765227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:02.255 [2024-07-24 19:06:39.765249] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:02.255 [2024-07-24 19:06:39.765264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:02.255 [2024-07-24 19:06:39.768519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:02.255 Malloc0 00:24:02.255 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.255 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:02.255 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.255 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:02.255 [2024-07-24 19:06:39.777732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:02.255 [2024-07-24 19:06:39.778159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.255 [2024-07-24 19:06:39.778190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:02.255 [2024-07-24 19:06:39.778207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:02.255 [2024-07-24 19:06:39.778441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:02.255 [2024-07-24 19:06:39.778654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:02.255 [2024-07-24 19:06:39.778675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:02.255 [2024-07-24 19:06:39.778689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:02.255 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.255 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:02.255 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.255 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:02.255 [2024-07-24 19:06:39.781925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:02.255 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.255 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:02.255 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.255 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:02.255 [2024-07-24 19:06:39.791354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:02.255 [2024-07-24 19:06:39.791756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.255 [2024-07-24 19:06:39.791784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195ac0 with addr=10.0.0.2, port=4420 00:24:02.255 [2024-07-24 19:06:39.791800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2195ac0 is same with the state(5) to be set 00:24:02.255 [2024-07-24 19:06:39.792028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195ac0 (9): Bad file descriptor 00:24:02.255 [2024-07-24 19:06:39.792268] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:02.255 [2024-07-24 19:06:39.792290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:02.255 [2024-07-24 19:06:39.792304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:02.255 [2024-07-24 19:06:39.793173] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.255 [2024-07-24 19:06:39.795557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:02.255 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.255 19:06:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3248588 00:24:02.255 [2024-07-24 19:06:39.804810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:02.513 [2024-07-24 19:06:39.880819] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
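Untangled from the interleaved xtrace above, the target-side configuration that host/bdevperf.sh@17-21 drives through rpc_cmd comes down to five RPC calls. A minimal stand-alone sketch (assuming it runs from the SPDK repo root against the default /var/tmp/spdk.sock):

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                  # bdevperf.sh@17: TCP transport, 8 KiB IO unit
  $rpc bdev_malloc_create 64 512 -b Malloc0                     # bdevperf.sh@18: 64 MiB ramdisk, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001    # bdevperf.sh@19
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                     # bdevperf.sh@20
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420    # bdevperf.sh@21

As a sanity check on the result table that follows: 6613.48 IOPS at 4096 B per I/O is 6613.48 * 4096 / 1048576, which is approximately 25.83 MiB/s and matches the MiB/s column.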
00:24:12.480
00:24:12.480 Latency(us)
00:24:12.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:12.480 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:12.480 Verification LBA range: start 0x0 length 0x4000
00:24:12.480 Nvme1n1 : 15.02 6613.48 25.83 8739.40 0.00 8311.98 819.20 17961.72
00:24:12.480 ===================================================================================================================
00:24:12.480 Total : 6613.48 25.83 8739.40 0.00 8311.98 819.20 17961.72
00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:12.480 rmmod nvme_tcp
00:24:12.480 rmmod nvme_fabrics
00:24:12.480 rmmod nvme_keyring
00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3249254 ']'
00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3249254
00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 3249254 ']'
00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 3249254
00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname
00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3249254
00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3249254'
00:24:12.480 killing process with pid 3249254
00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 3249254
00:24:12.480
19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 3249254 00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:12.480 19:06:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.413 19:06:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:14.413 00:24:14.413 real 0m22.448s 00:24:14.413 user 0m59.999s 00:24:14.413 sys 0m4.190s 00:24:14.413 19:06:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:14.414 ************************************ 00:24:14.414 END TEST nvmf_bdevperf 00:24:14.414 ************************************ 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.414 ************************************ 00:24:14.414 START TEST nvmf_target_disconnect 00:24:14.414 ************************************ 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:14.414 * Looking for test storage... 
00:24:14.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.414 
19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:24:14.414 19:06:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:16.315 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:16.315 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:24:16.315 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:16.315 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:16.315 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:16.315 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:16.315 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:16.315 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:24:16.315 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:16.315 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:24:16.315 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:24:16.315 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:24:16.315 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:24:16.315 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:16.316 
19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:16.316 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:16.316 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:16.316 Found net devices under 0000:09:00.0: cvl_0_0 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:16.316 Found net devices under 0000:09:00.1: cvl_0_1 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:16.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:16.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:24:16.316 00:24:16.316 --- 10.0.0.2 ping statistics --- 00:24:16.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.316 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:16.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:16.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:24:16.316 00:24:16.316 --- 10.0.0.1 ping statistics --- 00:24:16.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.316 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:16.316 ************************************ 00:24:16.316 START TEST nvmf_target_disconnect_tc1 00:24:16.316 ************************************ 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:24:16.316 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:16.317 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:24:16.317 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:16.317 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:16.317 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:16.317 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:16.317 19:06:53 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:16.317 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:16.317 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:16.317 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:16.317 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:24:16.317 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:16.317 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.317 [2024-07-24 19:06:53.909070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.317 [2024-07-24 19:06:53.909185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6e81a0 with addr=10.0.0.2, port=4420 00:24:16.317 [2024-07-24 19:06:53.909216] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:16.317 [2024-07-24 19:06:53.909238] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:16.317 [2024-07-24 19:06:53.909251] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:24:16.317 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:24:16.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:24:16.575 Initializing NVMe Controllers 00:24:16.575 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:24:16.575 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:16.575 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:16.575 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:16.575 00:24:16.575 real 0m0.098s 00:24:16.575 user 0m0.043s 00:24:16.575 sys 0m0.055s 00:24:16.575 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:16.575 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:16.575 ************************************ 00:24:16.575 END TEST nvmf_target_disconnect_tc1 00:24:16.575 ************************************ 00:24:16.575 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:24:16.575 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:16.575 19:06:53 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:16.575 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:16.575 ************************************ 00:24:16.575 START TEST nvmf_target_disconnect_tc2 00:24:16.575 ************************************ 00:24:16.575 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:24:16.575 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:24:16.575 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:16.575 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:16.575 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:16.575 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:16.575 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3252404 00:24:16.575 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:16.575 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3252404 00:24:16.575 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3252404 ']' 00:24:16.575 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.575 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:16.575 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.575 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:16.575 19:06:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:16.575 [2024-07-24 19:06:54.028170] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:24:16.575 [2024-07-24 19:06:54.028261] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.575 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.575 [2024-07-24 19:06:54.096768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:16.834 [2024-07-24 19:06:54.211058] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:16.834 [2024-07-24 19:06:54.211115] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:16.834 [2024-07-24 19:06:54.211144] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:16.834 [2024-07-24 19:06:54.211156] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:16.834 [2024-07-24 19:06:54.211165] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:16.834 [2024-07-24 19:06:54.211275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:24:16.834 [2024-07-24 19:06:54.211339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:24:16.834 [2024-07-24 19:06:54.211404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:24:16.834 [2024-07-24 19:06:54.211407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:17.399 19:06:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:17.399 19:06:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:24:17.399 19:06:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:17.399 19:06:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:17.399 19:06:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:17.399 19:06:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:17.399 19:06:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:17.399 19:06:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.399 19:06:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:17.658 Malloc0 00:24:17.658 19:06:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.658 19:06:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:17.658 19:06:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.658 19:06:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:17.658 [2024-07-24 19:06:55.022301] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.658 19:06:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.658 19:06:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:17.658 19:06:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 
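Worth noting before the failure injection that follows: the tc2 target (nvmfpid 3252404) runs inside the cvl_0_0_ns_spdk namespace on core mask 0xF0 (cores 4-7, matching the reactor start-up messages above), while the reconnect initiator launched at host/target_disconnect.sh@40 below runs in the root namespace on 0xF (cores 0-3). Condensed from this trace (a sketch; paths shortened to the repo root, backgrounding shown for clarity):

  # Target: cores 4-7, inside the test namespace
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  # Initiator: SPDK reconnect example on cores 0-3, queue depth 32, 4 KiB random 50/50 R/W for 10 s
  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  reconnectpid=$!

The disjoint masks presumably keep the deliberately abusive kill/reconnect cycle from starving either process of CPU.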
00:24:17.658 19:06:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:17.658 19:06:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.658 19:06:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:17.658 19:06:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.658 19:06:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:17.658 19:06:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.658 19:06:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:17.658 19:06:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.658 19:06:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:17.658 [2024-07-24 19:06:55.050562] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.658 19:06:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.658 19:06:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:17.658 19:06:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.658 19:06:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:17.658 19:06:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.658 19:06:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3252557 00:24:17.658 19:06:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:17.658 19:06:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:24:17.658 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.567 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3252404 00:24:19.567 19:06:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:24:19.567 Read completed with error (sct=0, sc=8) 00:24:19.567 starting I/O failed 00:24:19.567 Read completed with error (sct=0, sc=8) 00:24:19.567 starting I/O failed 00:24:19.567 Read completed with error (sct=0, sc=8) 00:24:19.567 starting I/O failed 00:24:19.567 Read completed with error (sct=0, sc=8) 00:24:19.567 starting 
I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Write completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Write completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Write completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Write completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Write completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Write completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Write completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Write completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Write completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Write completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Write completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Write completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 [2024-07-24 19:06:57.076179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 
00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Write completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Write completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Write completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Write completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Write completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Write completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 [2024-07-24 19:06:57.076478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.568 Read completed with error (sct=0, sc=8) 00:24:19.568 starting I/O failed 00:24:19.569 Read completed with error (sct=0, sc=8) 00:24:19.569 starting I/O failed 00:24:19.569 Read completed with error (sct=0, sc=8) 00:24:19.569 starting I/O failed 00:24:19.569 Read completed with error (sct=0, sc=8) 00:24:19.569 starting I/O failed 00:24:19.569 Read completed with error (sct=0, sc=8) 00:24:19.569 starting I/O failed 00:24:19.569 Read completed with error (sct=0, sc=8) 00:24:19.569 starting I/O failed 00:24:19.569 Read completed with error (sct=0, sc=8) 00:24:19.569 starting I/O failed 00:24:19.569 Write completed with error (sct=0, sc=8) 00:24:19.569 starting I/O failed 00:24:19.569 Read completed with error (sct=0, sc=8) 00:24:19.569 starting I/O failed 00:24:19.569 Read completed with error (sct=0, sc=8) 00:24:19.569 starting I/O failed 00:24:19.569 Read completed with error (sct=0, sc=8) 00:24:19.569 starting I/O failed 00:24:19.569 Write completed with error (sct=0, sc=8) 00:24:19.569 starting I/O failed 00:24:19.569 Write completed with error (sct=0, sc=8) 00:24:19.569 starting I/O failed 00:24:19.569 Read 
completed with error (sct=0, sc=8) 00:24:19.569 starting I/O failed 00:24:19.569 Read completed with error (sct=0, sc=8) 00:24:19.569 starting I/O failed 00:24:19.569 Write completed with error (sct=0, sc=8) 00:24:19.569 starting I/O failed 00:24:19.569 Read completed with error (sct=0, sc=8) 00:24:19.569 starting I/O failed 00:24:19.569 Read completed with error (sct=0, sc=8) 00:24:19.569 starting I/O failed 00:24:19.569 Read completed with error (sct=0, sc=8) 00:24:19.569 starting I/O failed 00:24:19.569 Write completed with error (sct=0, sc=8) 00:24:19.569 starting I/O failed 00:24:19.569 Read completed with error (sct=0, sc=8) 00:24:19.569 starting I/O failed 00:24:19.569 Read completed with error (sct=0, sc=8) 00:24:19.569 starting I/O failed 00:24:19.569 Read completed with error (sct=0, sc=8) 00:24:19.569 starting I/O failed 00:24:19.569 Write completed with error (sct=0, sc=8) 00:24:19.569 starting I/O failed 00:24:19.569 Read completed with error (sct=0, sc=8) 00:24:19.569 starting I/O failed 00:24:19.569 Write completed with error (sct=0, sc=8) 00:24:19.569 starting I/O failed 00:24:19.569 Read completed with error (sct=0, sc=8) 00:24:19.569 starting I/O failed 00:24:19.569 [2024-07-24 19:06:57.076822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:19.569 [2024-07-24 19:06:57.077059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.569 [2024-07-24 19:06:57.077110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.569 qpair failed and we were unable to recover it. 00:24:19.569 [2024-07-24 19:06:57.077257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.569 [2024-07-24 19:06:57.077283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.569 qpair failed and we were unable to recover it. 00:24:19.569 [2024-07-24 19:06:57.077428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.569 [2024-07-24 19:06:57.077453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.569 qpair failed and we were unable to recover it. 00:24:19.569 [2024-07-24 19:06:57.077586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.569 [2024-07-24 19:06:57.077611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.569 qpair failed and we were unable to recover it. 00:24:19.569 [2024-07-24 19:06:57.077758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.569 [2024-07-24 19:06:57.077784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.569 qpair failed and we were unable to recover it. 00:24:19.569 [2024-07-24 19:06:57.077916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.569 [2024-07-24 19:06:57.077940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.569 qpair failed and we were unable to recover it. 
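Decoding the storm above: every "completed with error (sct=0, sc=8)" is an NVMe completion with Status Code Type 0 (generic) and Status Code 0x08, Command Aborted due to SQ Deletion, which is what in-flight I/O collapses to once host/target_disconnect.sh@45 kill -9's the target out from under the initiator; the "CQ transport error -6 (No such device or address)" lines and the connect() errno = 111 (ECONNREFUSED) loop that follows are the same event seen from the TCP transport. The injection step, condensed (the eventual target restart is an assumption, since it happens beyond this excerpt):

  kill -9 "$nvmfpid"   # target_disconnect.sh@45: hard-kill the target mid-I/O
  sleep 2              # target_disconnect.sh@47: let the initiator churn through ECONNREFUSED retries
  # (assumption) the script then brings the target back up so the reconnect
  # example can re-establish its qpairs before its 10 s run time expires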
00:24:19.569 [2024-07-24 19:06:57.078096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.569 [2024-07-24 19:06:57.078130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.569 qpair failed and we were unable to recover it. 00:24:19.569 [2024-07-24 19:06:57.078301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.569 [2024-07-24 19:06:57.078326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.569 qpair failed and we were unable to recover it. 00:24:19.569 [2024-07-24 19:06:57.078481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.569 [2024-07-24 19:06:57.078507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.569 qpair failed and we were unable to recover it. 00:24:19.569 [2024-07-24 19:06:57.078662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.569 [2024-07-24 19:06:57.078689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.569 qpair failed and we were unable to recover it. 00:24:19.569 [2024-07-24 19:06:57.078849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.569 [2024-07-24 19:06:57.078890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.569 qpair failed and we were unable to recover it. 00:24:19.569 [2024-07-24 19:06:57.079008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.569 [2024-07-24 19:06:57.079033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.569 qpair failed and we were unable to recover it. 00:24:19.569 [2024-07-24 19:06:57.079174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.569 [2024-07-24 19:06:57.079200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.569 qpair failed and we were unable to recover it. 00:24:19.569 [2024-07-24 19:06:57.079332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.569 [2024-07-24 19:06:57.079359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.569 qpair failed and we were unable to recover it. 00:24:19.569 [2024-07-24 19:06:57.079532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.570 [2024-07-24 19:06:57.079558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.570 qpair failed and we were unable to recover it. 00:24:19.570 [2024-07-24 19:06:57.079723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.570 [2024-07-24 19:06:57.079750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.570 qpair failed and we were unable to recover it. 
00:24:19.570 [2024-07-24 19:06:57.079907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.570 [2024-07-24 19:06:57.079932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.570 qpair failed and we were unable to recover it. 00:24:19.570 [2024-07-24 19:06:57.080107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.570 [2024-07-24 19:06:57.080133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.570 qpair failed and we were unable to recover it. 00:24:19.570 [2024-07-24 19:06:57.080271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.570 [2024-07-24 19:06:57.080295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.570 qpair failed and we were unable to recover it. 00:24:19.570 [2024-07-24 19:06:57.080440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.570 [2024-07-24 19:06:57.080465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.570 qpair failed and we were unable to recover it. 00:24:19.570 [2024-07-24 19:06:57.080702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.570 [2024-07-24 19:06:57.080727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.570 qpair failed and we were unable to recover it. 00:24:19.570 [2024-07-24 19:06:57.080851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.570 [2024-07-24 19:06:57.080877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.570 qpair failed and we were unable to recover it. 00:24:19.570 [2024-07-24 19:06:57.081030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.570 [2024-07-24 19:06:57.081055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.570 qpair failed and we were unable to recover it. 00:24:19.570 [2024-07-24 19:06:57.081177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.570 [2024-07-24 19:06:57.081203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.570 qpair failed and we were unable to recover it. 00:24:19.570 [2024-07-24 19:06:57.081340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.570 [2024-07-24 19:06:57.081365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.570 qpair failed and we were unable to recover it. 00:24:19.570 [2024-07-24 19:06:57.081549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.570 [2024-07-24 19:06:57.081574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.570 qpair failed and we were unable to recover it. 
00:24:19.570 [2024-07-24 19:06:57.081703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.570 [2024-07-24 19:06:57.081729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.570 qpair failed and we were unable to recover it. 00:24:19.570 [2024-07-24 19:06:57.081862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.570 [2024-07-24 19:06:57.081887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.570 qpair failed and we were unable to recover it. 00:24:19.570 [2024-07-24 19:06:57.082011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.570 [2024-07-24 19:06:57.082038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.570 qpair failed and we were unable to recover it. 00:24:19.570 [2024-07-24 19:06:57.082207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.570 [2024-07-24 19:06:57.082247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.570 qpair failed and we were unable to recover it. 00:24:19.570 [2024-07-24 19:06:57.082389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.570 [2024-07-24 19:06:57.082421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.570 qpair failed and we were unable to recover it. 00:24:19.570 [2024-07-24 19:06:57.082610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.570 [2024-07-24 19:06:57.082635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.570 qpair failed and we were unable to recover it. 00:24:19.570 [2024-07-24 19:06:57.082809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.570 [2024-07-24 19:06:57.082834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.570 qpair failed and we were unable to recover it. 00:24:19.570 [2024-07-24 19:06:57.082969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.570 [2024-07-24 19:06:57.082996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.570 qpair failed and we were unable to recover it. 00:24:19.570 [2024-07-24 19:06:57.083129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.570 [2024-07-24 19:06:57.083164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.570 qpair failed and we were unable to recover it. 00:24:19.570 [2024-07-24 19:06:57.083307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.570 [2024-07-24 19:06:57.083331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.570 qpair failed and we were unable to recover it. 
00:24:19.570 [2024-07-24 19:06:57.083472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.570 [2024-07-24 19:06:57.083513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.570 qpair failed and we were unable to recover it. 00:24:19.570 [2024-07-24 19:06:57.083673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.570 [2024-07-24 19:06:57.083701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.570 qpair failed and we were unable to recover it. 00:24:19.570 [2024-07-24 19:06:57.084015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.570 [2024-07-24 19:06:57.084069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.570 qpair failed and we were unable to recover it. 00:24:19.570 [2024-07-24 19:06:57.084226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.571 [2024-07-24 19:06:57.084252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.571 qpair failed and we were unable to recover it. 00:24:19.571 [2024-07-24 19:06:57.084377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.571 [2024-07-24 19:06:57.084403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.571 qpair failed and we were unable to recover it. 00:24:19.571 [2024-07-24 19:06:57.084554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.571 [2024-07-24 19:06:57.084579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.571 qpair failed and we were unable to recover it. 00:24:19.571 [2024-07-24 19:06:57.084737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.571 [2024-07-24 19:06:57.084761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.571 qpair failed and we were unable to recover it. 00:24:19.571 [2024-07-24 19:06:57.084943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.571 [2024-07-24 19:06:57.084969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.571 qpair failed and we were unable to recover it. 00:24:19.571 [2024-07-24 19:06:57.085114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.571 [2024-07-24 19:06:57.085158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.571 qpair failed and we were unable to recover it. 00:24:19.571 [2024-07-24 19:06:57.085279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.571 [2024-07-24 19:06:57.085304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.571 qpair failed and we were unable to recover it. 
00:24:19.571 [2024-07-24 19:06:57.085461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.571 [2024-07-24 19:06:57.085488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.571 qpair failed and we were unable to recover it. 00:24:19.571 [2024-07-24 19:06:57.085680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.571 [2024-07-24 19:06:57.085707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.571 qpair failed and we were unable to recover it. 00:24:19.571 [2024-07-24 19:06:57.085888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.571 [2024-07-24 19:06:57.085912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.571 qpair failed and we were unable to recover it. 00:24:19.571 [2024-07-24 19:06:57.086041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.571 [2024-07-24 19:06:57.086066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.571 qpair failed and we were unable to recover it. 00:24:19.571 [2024-07-24 19:06:57.086211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.571 [2024-07-24 19:06:57.086239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.571 qpair failed and we were unable to recover it. 00:24:19.571 [2024-07-24 19:06:57.086370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.571 [2024-07-24 19:06:57.086395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.571 qpair failed and we were unable to recover it. 00:24:19.571 [2024-07-24 19:06:57.087050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.571 [2024-07-24 19:06:57.087087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.571 qpair failed and we were unable to recover it. 00:24:19.571 [2024-07-24 19:06:57.087271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.571 [2024-07-24 19:06:57.087297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.571 qpair failed and we were unable to recover it. 00:24:19.571 [2024-07-24 19:06:57.087451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.571 [2024-07-24 19:06:57.087476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.571 qpair failed and we were unable to recover it. 00:24:19.571 [2024-07-24 19:06:57.087663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.571 [2024-07-24 19:06:57.087688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.571 qpair failed and we were unable to recover it. 
00:24:19.571 [2024-07-24 19:06:57.087820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.571 [2024-07-24 19:06:57.087851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.571 qpair failed and we were unable to recover it. 00:24:19.571 [2024-07-24 19:06:57.087989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.571 [2024-07-24 19:06:57.088019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.571 qpair failed and we were unable to recover it. 00:24:19.571 [2024-07-24 19:06:57.088152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.571 [2024-07-24 19:06:57.088180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.571 qpair failed and we were unable to recover it. 00:24:19.571 [2024-07-24 19:06:57.088341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.571 [2024-07-24 19:06:57.088367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.571 qpair failed and we were unable to recover it. 00:24:19.571 [2024-07-24 19:06:57.088514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.571 [2024-07-24 19:06:57.088540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.571 qpair failed and we were unable to recover it. 00:24:19.571 [2024-07-24 19:06:57.088701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.571 [2024-07-24 19:06:57.088727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.571 qpair failed and we were unable to recover it. 00:24:19.571 [2024-07-24 19:06:57.088905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.571 [2024-07-24 19:06:57.088930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.571 qpair failed and we were unable to recover it. 00:24:19.571 [2024-07-24 19:06:57.089073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.571 [2024-07-24 19:06:57.089099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.572 qpair failed and we were unable to recover it. 00:24:19.572 [2024-07-24 19:06:57.089267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.572 [2024-07-24 19:06:57.089292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.572 qpair failed and we were unable to recover it. 00:24:19.572 [2024-07-24 19:06:57.089438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.572 [2024-07-24 19:06:57.089464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.572 qpair failed and we were unable to recover it. 
00:24:19.572 [2024-07-24 19:06:57.089642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.572 [2024-07-24 19:06:57.089667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.572 qpair failed and we were unable to recover it. 00:24:19.572 [2024-07-24 19:06:57.089788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.572 [2024-07-24 19:06:57.089814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.572 qpair failed and we were unable to recover it. 00:24:19.572 [2024-07-24 19:06:57.089934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.572 [2024-07-24 19:06:57.089959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.572 qpair failed and we were unable to recover it. 00:24:19.572 [2024-07-24 19:06:57.090089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.572 [2024-07-24 19:06:57.090120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.572 qpair failed and we were unable to recover it. 00:24:19.572 [2024-07-24 19:06:57.090274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.572 [2024-07-24 19:06:57.090300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.572 qpair failed and we were unable to recover it. 00:24:19.572 [2024-07-24 19:06:57.090453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.572 [2024-07-24 19:06:57.090479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.572 qpair failed and we were unable to recover it. 00:24:19.572 [2024-07-24 19:06:57.090614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.572 [2024-07-24 19:06:57.090640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.572 qpair failed and we were unable to recover it. 00:24:19.572 [2024-07-24 19:06:57.090773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.572 [2024-07-24 19:06:57.090799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.572 qpair failed and we were unable to recover it. 00:24:19.572 [2024-07-24 19:06:57.090943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.572 [2024-07-24 19:06:57.090969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.572 qpair failed and we were unable to recover it. 00:24:19.572 [2024-07-24 19:06:57.091107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.572 [2024-07-24 19:06:57.091133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.572 qpair failed and we were unable to recover it. 
00:24:19.572 [2024-07-24 19:06:57.091258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.572 [2024-07-24 19:06:57.091285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.572 qpair failed and we were unable to recover it. 00:24:19.572 [2024-07-24 19:06:57.091442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.572 [2024-07-24 19:06:57.091468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.572 qpair failed and we were unable to recover it. 00:24:19.572 [2024-07-24 19:06:57.091703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.572 [2024-07-24 19:06:57.091728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.572 qpair failed and we were unable to recover it. 00:24:19.572 [2024-07-24 19:06:57.091887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.572 [2024-07-24 19:06:57.091914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.572 qpair failed and we were unable to recover it. 00:24:19.572 [2024-07-24 19:06:57.092074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.572 [2024-07-24 19:06:57.092099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.572 qpair failed and we were unable to recover it. 00:24:19.572 [2024-07-24 19:06:57.092272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.572 [2024-07-24 19:06:57.092298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.572 qpair failed and we were unable to recover it. 00:24:19.572 [2024-07-24 19:06:57.092444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.572 [2024-07-24 19:06:57.092470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.572 qpair failed and we were unable to recover it. 00:24:19.572 [2024-07-24 19:06:57.092625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.572 [2024-07-24 19:06:57.092650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.572 qpair failed and we were unable to recover it. 00:24:19.572 [2024-07-24 19:06:57.092813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.572 [2024-07-24 19:06:57.092839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.572 qpair failed and we were unable to recover it. 00:24:19.572 [2024-07-24 19:06:57.093024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.572 [2024-07-24 19:06:57.093050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.572 qpair failed and we were unable to recover it. 
00:24:19.572 [2024-07-24 19:06:57.093199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.572 [2024-07-24 19:06:57.093225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.572 qpair failed and we were unable to recover it. 00:24:19.572 [2024-07-24 19:06:57.093378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.572 [2024-07-24 19:06:57.093404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.572 qpair failed and we were unable to recover it. 00:24:19.573 [2024-07-24 19:06:57.093526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.573 [2024-07-24 19:06:57.093552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.573 qpair failed and we were unable to recover it. 00:24:19.573 [2024-07-24 19:06:57.093740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.573 [2024-07-24 19:06:57.093773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.573 qpair failed and we were unable to recover it. 00:24:19.573 [2024-07-24 19:06:57.093933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.573 [2024-07-24 19:06:57.093959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.573 qpair failed and we were unable to recover it. 00:24:19.573 [2024-07-24 19:06:57.094115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.573 [2024-07-24 19:06:57.094141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.573 qpair failed and we were unable to recover it. 00:24:19.573 [2024-07-24 19:06:57.094286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.573 [2024-07-24 19:06:57.094329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.573 qpair failed and we were unable to recover it. 00:24:19.573 [2024-07-24 19:06:57.094503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.573 [2024-07-24 19:06:57.094530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.573 qpair failed and we were unable to recover it. 00:24:19.573 [2024-07-24 19:06:57.094679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.573 [2024-07-24 19:06:57.094707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.573 qpair failed and we were unable to recover it. 00:24:19.573 [2024-07-24 19:06:57.094858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.573 [2024-07-24 19:06:57.094884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.573 qpair failed and we were unable to recover it. 
00:24:19.573 [2024-07-24 19:06:57.095045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.573 [2024-07-24 19:06:57.095071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.573 qpair failed and we were unable to recover it. 00:24:19.573 [2024-07-24 19:06:57.095240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.573 [2024-07-24 19:06:57.095270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.573 qpair failed and we were unable to recover it. 00:24:19.573 [2024-07-24 19:06:57.095424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.573 [2024-07-24 19:06:57.095450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.573 qpair failed and we were unable to recover it. 00:24:19.573 [2024-07-24 19:06:57.095573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.573 [2024-07-24 19:06:57.095600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.573 qpair failed and we were unable to recover it. 00:24:19.573 [2024-07-24 19:06:57.095748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.573 [2024-07-24 19:06:57.095775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.573 qpair failed and we were unable to recover it. 00:24:19.573 [2024-07-24 19:06:57.095900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.573 [2024-07-24 19:06:57.095925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.573 qpair failed and we were unable to recover it. 00:24:19.573 [2024-07-24 19:06:57.096060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.573 [2024-07-24 19:06:57.096085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.573 qpair failed and we were unable to recover it. 00:24:19.573 [2024-07-24 19:06:57.096307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.573 [2024-07-24 19:06:57.096333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.573 qpair failed and we were unable to recover it. 00:24:19.573 [2024-07-24 19:06:57.096452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.573 [2024-07-24 19:06:57.096478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.573 qpair failed and we were unable to recover it. 00:24:19.573 [2024-07-24 19:06:57.096631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.573 [2024-07-24 19:06:57.096658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.573 qpair failed and we were unable to recover it. 
00:24:19.573 [2024-07-24 19:06:57.096832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.573 [2024-07-24 19:06:57.096863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.573 qpair failed and we were unable to recover it. 00:24:19.573 [2024-07-24 19:06:57.097035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.573 [2024-07-24 19:06:57.097060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.573 qpair failed and we were unable to recover it. 00:24:19.573 [2024-07-24 19:06:57.097284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.573 [2024-07-24 19:06:57.097310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.573 qpair failed and we were unable to recover it. 00:24:19.573 [2024-07-24 19:06:57.097457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.573 [2024-07-24 19:06:57.097483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.573 qpair failed and we were unable to recover it. 00:24:19.573 [2024-07-24 19:06:57.097681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.573 [2024-07-24 19:06:57.097723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.573 qpair failed and we were unable to recover it. 00:24:19.573 [2024-07-24 19:06:57.097896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.573 [2024-07-24 19:06:57.097922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.573 qpair failed and we were unable to recover it. 00:24:19.573 [2024-07-24 19:06:57.098076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.573 [2024-07-24 19:06:57.098111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.574 qpair failed and we were unable to recover it. 00:24:19.574 [2024-07-24 19:06:57.098253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.574 [2024-07-24 19:06:57.098279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.574 qpair failed and we were unable to recover it. 00:24:19.574 [2024-07-24 19:06:57.098397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.574 [2024-07-24 19:06:57.098423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.574 qpair failed and we were unable to recover it. 00:24:19.574 [2024-07-24 19:06:57.098551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.574 [2024-07-24 19:06:57.098576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.574 qpair failed and we were unable to recover it. 
00:24:19.574 [2024-07-24 19:06:57.098711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.574 [2024-07-24 19:06:57.098736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.574 qpair failed and we were unable to recover it. 00:24:19.574 [2024-07-24 19:06:57.098890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.574 [2024-07-24 19:06:57.098915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.574 qpair failed and we were unable to recover it. 00:24:19.574 [2024-07-24 19:06:57.099055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.574 [2024-07-24 19:06:57.099080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:19.574 qpair failed and we were unable to recover it. 00:24:19.574 [2024-07-24 19:06:57.099257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.574 [2024-07-24 19:06:57.099296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.574 qpair failed and we were unable to recover it. 00:24:19.574 [2024-07-24 19:06:57.099435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.574 [2024-07-24 19:06:57.099461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.574 qpair failed and we were unable to recover it. 00:24:19.574 [2024-07-24 19:06:57.099615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.574 [2024-07-24 19:06:57.099639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.574 qpair failed and we were unable to recover it. 00:24:19.574 [2024-07-24 19:06:57.099820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.574 [2024-07-24 19:06:57.099845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.574 qpair failed and we were unable to recover it. 00:24:19.574 [2024-07-24 19:06:57.099998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.574 [2024-07-24 19:06:57.100023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.574 qpair failed and we were unable to recover it. 00:24:19.574 [2024-07-24 19:06:57.100184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.574 [2024-07-24 19:06:57.100215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.574 qpair failed and we were unable to recover it. 00:24:19.574 [2024-07-24 19:06:57.100345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.574 [2024-07-24 19:06:57.100369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.574 qpair failed and we were unable to recover it. 
00:24:19.574 [2024-07-24 19:06:57.100595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.574 [2024-07-24 19:06:57.100620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.574 qpair failed and we were unable to recover it. 00:24:19.574 [2024-07-24 19:06:57.100772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.574 [2024-07-24 19:06:57.100797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.574 qpair failed and we were unable to recover it. 00:24:19.574 [2024-07-24 19:06:57.100919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.574 [2024-07-24 19:06:57.100944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.574 qpair failed and we were unable to recover it. 00:24:19.574 [2024-07-24 19:06:57.101090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.574 [2024-07-24 19:06:57.101123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.574 qpair failed and we were unable to recover it. 00:24:19.574 [2024-07-24 19:06:57.101284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.574 [2024-07-24 19:06:57.101309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.574 qpair failed and we were unable to recover it. 00:24:19.574 [2024-07-24 19:06:57.101471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.574 [2024-07-24 19:06:57.101497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.574 qpair failed and we were unable to recover it. 00:24:19.574 [2024-07-24 19:06:57.101619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.574 [2024-07-24 19:06:57.101644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.574 qpair failed and we were unable to recover it. 00:24:19.574 [2024-07-24 19:06:57.101837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.574 [2024-07-24 19:06:57.101862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.574 qpair failed and we were unable to recover it. 00:24:19.574 [2024-07-24 19:06:57.102040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.574 [2024-07-24 19:06:57.102064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.574 qpair failed and we were unable to recover it. 00:24:19.574 [2024-07-24 19:06:57.102247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.574 [2024-07-24 19:06:57.102271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.574 qpair failed and we were unable to recover it. 
00:24:19.574 [2024-07-24 19:06:57.102402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.574 [2024-07-24 19:06:57.102429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.574 qpair failed and we were unable to recover it. 00:24:19.574 [2024-07-24 19:06:57.102580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.574 [2024-07-24 19:06:57.102604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.575 qpair failed and we were unable to recover it. 00:24:19.575 [2024-07-24 19:06:57.102759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.575 [2024-07-24 19:06:57.102784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.575 qpair failed and we were unable to recover it. 00:24:19.575 [2024-07-24 19:06:57.102943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.575 [2024-07-24 19:06:57.102968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.575 qpair failed and we were unable to recover it. 00:24:19.575 [2024-07-24 19:06:57.103094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.575 [2024-07-24 19:06:57.103125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.575 qpair failed and we were unable to recover it. 00:24:19.575 [2024-07-24 19:06:57.103281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.575 [2024-07-24 19:06:57.103305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.575 qpair failed and we were unable to recover it. 00:24:19.575 [2024-07-24 19:06:57.103459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.575 [2024-07-24 19:06:57.103484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.575 qpair failed and we were unable to recover it. 00:24:19.575 [2024-07-24 19:06:57.103654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.575 [2024-07-24 19:06:57.103682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.575 qpair failed and we were unable to recover it. 00:24:19.575 [2024-07-24 19:06:57.103823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.575 [2024-07-24 19:06:57.103850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.575 qpair failed and we were unable to recover it. 00:24:19.575 [2024-07-24 19:06:57.104024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.575 [2024-07-24 19:06:57.104051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.575 qpair failed and we were unable to recover it. 
00:24:19.575 [2024-07-24 19:06:57.104225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.575 [2024-07-24 19:06:57.104251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.575 qpair failed and we were unable to recover it. 00:24:19.575 [2024-07-24 19:06:57.104434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.575 [2024-07-24 19:06:57.104459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.575 qpair failed and we were unable to recover it. 00:24:19.575 [2024-07-24 19:06:57.104583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.575 [2024-07-24 19:06:57.104609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.575 qpair failed and we were unable to recover it. 00:24:19.575 [2024-07-24 19:06:57.104821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.575 [2024-07-24 19:06:57.104849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.575 qpair failed and we were unable to recover it. 00:24:19.575 [2024-07-24 19:06:57.105058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.575 [2024-07-24 19:06:57.105083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.575 qpair failed and we were unable to recover it. 00:24:19.575 [2024-07-24 19:06:57.105241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.575 [2024-07-24 19:06:57.105270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.575 qpair failed and we were unable to recover it. 00:24:19.575 [2024-07-24 19:06:57.105457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.575 [2024-07-24 19:06:57.105482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.575 qpair failed and we were unable to recover it. 00:24:19.575 [2024-07-24 19:06:57.105611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.575 [2024-07-24 19:06:57.105638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.575 qpair failed and we were unable to recover it. 00:24:19.575 [2024-07-24 19:06:57.105823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.575 [2024-07-24 19:06:57.105863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.575 qpair failed and we were unable to recover it. 00:24:19.575 [2024-07-24 19:06:57.106067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.575 [2024-07-24 19:06:57.106092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.575 qpair failed and we were unable to recover it. 
00:24:19.575 [2024-07-24 19:06:57.106250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.576 [2024-07-24 19:06:57.106275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:19.576 qpair failed and we were unable to recover it.
00:24:19.576 [... the same three-line failure pattern (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 19:06:57.106421 through 19:06:57.144264; duplicate entries elided ...]
00:24:19.582 [2024-07-24 19:06:57.144417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.582 [2024-07-24 19:06:57.144441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:19.582 qpair failed and we were unable to recover it.
00:24:19.582 [2024-07-24 19:06:57.144602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.582 [2024-07-24 19:06:57.144629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.582 qpair failed and we were unable to recover it. 00:24:19.582 [2024-07-24 19:06:57.144808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.582 [2024-07-24 19:06:57.144832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.582 qpair failed and we were unable to recover it. 00:24:19.582 [2024-07-24 19:06:57.144983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.582 [2024-07-24 19:06:57.145011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.582 qpair failed and we were unable to recover it. 00:24:19.582 [2024-07-24 19:06:57.145138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.582 [2024-07-24 19:06:57.145170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.582 qpair failed and we were unable to recover it. 00:24:19.582 [2024-07-24 19:06:57.145289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.582 [2024-07-24 19:06:57.145313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.582 qpair failed and we were unable to recover it. 00:24:19.582 [2024-07-24 19:06:57.145428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.582 [2024-07-24 19:06:57.145452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.582 qpair failed and we were unable to recover it. 00:24:19.582 [2024-07-24 19:06:57.145571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.582 [2024-07-24 19:06:57.145596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.582 qpair failed and we were unable to recover it. 00:24:19.582 [2024-07-24 19:06:57.145752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.582 [2024-07-24 19:06:57.145775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.582 qpair failed and we were unable to recover it. 00:24:19.582 [2024-07-24 19:06:57.145900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.582 [2024-07-24 19:06:57.145924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.582 qpair failed and we were unable to recover it. 00:24:19.582 [2024-07-24 19:06:57.146052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.582 [2024-07-24 19:06:57.146091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.582 qpair failed and we were unable to recover it. 
00:24:19.582 [2024-07-24 19:06:57.146279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.583 [2024-07-24 19:06:57.146304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.583 qpair failed and we were unable to recover it. 00:24:19.583 [2024-07-24 19:06:57.146482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.583 [2024-07-24 19:06:57.146506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.583 qpair failed and we were unable to recover it. 00:24:19.583 [2024-07-24 19:06:57.146682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.583 [2024-07-24 19:06:57.146705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.583 qpair failed and we were unable to recover it. 00:24:19.583 [2024-07-24 19:06:57.146834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.583 [2024-07-24 19:06:57.146859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.583 qpair failed and we were unable to recover it. 00:24:19.583 [2024-07-24 19:06:57.147009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.583 [2024-07-24 19:06:57.147032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.583 qpair failed and we were unable to recover it. 00:24:19.583 [2024-07-24 19:06:57.147189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.583 [2024-07-24 19:06:57.147213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.583 qpair failed and we were unable to recover it. 00:24:19.583 [2024-07-24 19:06:57.147391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.583 [2024-07-24 19:06:57.147416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.583 qpair failed and we were unable to recover it. 00:24:19.583 [2024-07-24 19:06:57.147595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.583 [2024-07-24 19:06:57.147620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.583 qpair failed and we were unable to recover it. 00:24:19.583 [2024-07-24 19:06:57.147750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.583 [2024-07-24 19:06:57.147792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.583 qpair failed and we were unable to recover it. 00:24:19.583 [2024-07-24 19:06:57.147938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.583 [2024-07-24 19:06:57.147965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.583 qpair failed and we were unable to recover it. 
00:24:19.583 [2024-07-24 19:06:57.148155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.583 [2024-07-24 19:06:57.148181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.583 qpair failed and we were unable to recover it. 00:24:19.583 [2024-07-24 19:06:57.148351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.583 [2024-07-24 19:06:57.148375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.583 qpair failed and we were unable to recover it. 00:24:19.583 [2024-07-24 19:06:57.148549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.583 [2024-07-24 19:06:57.148573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.583 qpair failed and we were unable to recover it. 00:24:19.583 [2024-07-24 19:06:57.148701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.583 [2024-07-24 19:06:57.148728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.583 qpair failed and we were unable to recover it. 00:24:19.583 [2024-07-24 19:06:57.148882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.583 [2024-07-24 19:06:57.148907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.583 qpair failed and we were unable to recover it. 00:24:19.583 [2024-07-24 19:06:57.149035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.583 [2024-07-24 19:06:57.149060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.583 qpair failed and we were unable to recover it. 00:24:19.583 [2024-07-24 19:06:57.149247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.583 [2024-07-24 19:06:57.149273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.583 qpair failed and we were unable to recover it. 00:24:19.583 [2024-07-24 19:06:57.149426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.583 [2024-07-24 19:06:57.149450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.583 qpair failed and we were unable to recover it. 00:24:19.583 [2024-07-24 19:06:57.149596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.583 [2024-07-24 19:06:57.149620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.583 qpair failed and we were unable to recover it. 00:24:19.583 [2024-07-24 19:06:57.149773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.583 [2024-07-24 19:06:57.149802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.583 qpair failed and we were unable to recover it. 
00:24:19.583 [2024-07-24 19:06:57.149918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.583 [2024-07-24 19:06:57.149943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.583 qpair failed and we were unable to recover it. 00:24:19.583 [2024-07-24 19:06:57.150090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.583 [2024-07-24 19:06:57.150121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.583 qpair failed and we were unable to recover it. 00:24:19.583 [2024-07-24 19:06:57.150249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.583 [2024-07-24 19:06:57.150276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.583 qpair failed and we were unable to recover it. 00:24:19.583 [2024-07-24 19:06:57.150397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.583 [2024-07-24 19:06:57.150422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.583 qpair failed and we were unable to recover it. 00:24:19.583 [2024-07-24 19:06:57.150571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.583 [2024-07-24 19:06:57.150596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.583 qpair failed and we were unable to recover it. 00:24:19.583 [2024-07-24 19:06:57.150729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.583 [2024-07-24 19:06:57.150754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.584 qpair failed and we were unable to recover it. 00:24:19.584 [2024-07-24 19:06:57.150884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.584 [2024-07-24 19:06:57.150909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.584 qpair failed and we were unable to recover it. 00:24:19.584 [2024-07-24 19:06:57.151032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.584 [2024-07-24 19:06:57.151057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.584 qpair failed and we were unable to recover it. 00:24:19.584 [2024-07-24 19:06:57.151252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.584 [2024-07-24 19:06:57.151277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.584 qpair failed and we were unable to recover it. 00:24:19.584 [2024-07-24 19:06:57.151427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.584 [2024-07-24 19:06:57.151452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.584 qpair failed and we were unable to recover it. 
00:24:19.584 [2024-07-24 19:06:57.151640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.584 [2024-07-24 19:06:57.151664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.584 qpair failed and we were unable to recover it. 00:24:19.584 [2024-07-24 19:06:57.151796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.584 [2024-07-24 19:06:57.151822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.584 qpair failed and we were unable to recover it. 00:24:19.584 [2024-07-24 19:06:57.151994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.584 [2024-07-24 19:06:57.152019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.584 qpair failed and we were unable to recover it. 00:24:19.584 [2024-07-24 19:06:57.152153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.584 [2024-07-24 19:06:57.152178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.584 qpair failed and we were unable to recover it. 00:24:19.584 [2024-07-24 19:06:57.152304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.584 [2024-07-24 19:06:57.152329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.584 qpair failed and we were unable to recover it. 00:24:19.584 [2024-07-24 19:06:57.152454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.584 [2024-07-24 19:06:57.152494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.584 qpair failed and we were unable to recover it. 00:24:19.584 [2024-07-24 19:06:57.152634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.584 [2024-07-24 19:06:57.152661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.584 qpair failed and we were unable to recover it. 00:24:19.584 [2024-07-24 19:06:57.152832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.584 [2024-07-24 19:06:57.152857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.584 qpair failed and we were unable to recover it. 00:24:19.584 [2024-07-24 19:06:57.152988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.584 [2024-07-24 19:06:57.153013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.584 qpair failed and we were unable to recover it. 00:24:19.584 [2024-07-24 19:06:57.153145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.584 [2024-07-24 19:06:57.153170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.584 qpair failed and we were unable to recover it. 
00:24:19.584 [2024-07-24 19:06:57.153348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.584 [2024-07-24 19:06:57.153373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.584 qpair failed and we were unable to recover it. 00:24:19.584 [2024-07-24 19:06:57.153506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.584 [2024-07-24 19:06:57.153532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.584 qpair failed and we were unable to recover it. 00:24:19.584 [2024-07-24 19:06:57.153661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.584 [2024-07-24 19:06:57.153686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.584 qpair failed and we were unable to recover it. 00:24:19.584 [2024-07-24 19:06:57.153875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.584 [2024-07-24 19:06:57.153900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.584 qpair failed and we were unable to recover it. 00:24:19.584 [2024-07-24 19:06:57.154073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.584 [2024-07-24 19:06:57.154098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.584 qpair failed and we were unable to recover it. 00:24:19.584 [2024-07-24 19:06:57.154250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.584 [2024-07-24 19:06:57.154275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.584 qpair failed and we were unable to recover it. 00:24:19.584 [2024-07-24 19:06:57.154396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.584 [2024-07-24 19:06:57.154422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.584 qpair failed and we were unable to recover it. 00:24:19.584 [2024-07-24 19:06:57.154578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.584 [2024-07-24 19:06:57.154602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.584 qpair failed and we were unable to recover it. 00:24:19.584 [2024-07-24 19:06:57.154724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.584 [2024-07-24 19:06:57.154749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.584 qpair failed and we were unable to recover it. 00:24:19.584 [2024-07-24 19:06:57.154925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.584 [2024-07-24 19:06:57.154950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.584 qpair failed and we were unable to recover it. 
00:24:19.584 [2024-07-24 19:06:57.155077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.584 [2024-07-24 19:06:57.155107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.585 qpair failed and we were unable to recover it. 00:24:19.585 [2024-07-24 19:06:57.155247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.585 [2024-07-24 19:06:57.155272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.585 qpair failed and we were unable to recover it. 00:24:19.585 [2024-07-24 19:06:57.155426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.585 [2024-07-24 19:06:57.155450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.585 qpair failed and we were unable to recover it. 00:24:19.585 [2024-07-24 19:06:57.155624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.585 [2024-07-24 19:06:57.155648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.585 qpair failed and we were unable to recover it. 00:24:19.585 [2024-07-24 19:06:57.155779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.585 [2024-07-24 19:06:57.155804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.585 qpair failed and we were unable to recover it. 00:24:19.585 [2024-07-24 19:06:57.155959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.585 [2024-07-24 19:06:57.155984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.585 qpair failed and we were unable to recover it. 00:24:19.585 [2024-07-24 19:06:57.156153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.585 [2024-07-24 19:06:57.156182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.585 qpair failed and we were unable to recover it. 00:24:19.585 [2024-07-24 19:06:57.156391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.585 [2024-07-24 19:06:57.156416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.585 qpair failed and we were unable to recover it. 00:24:19.585 [2024-07-24 19:06:57.156578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.585 [2024-07-24 19:06:57.156603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.585 qpair failed and we were unable to recover it. 00:24:19.585 [2024-07-24 19:06:57.156733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.585 [2024-07-24 19:06:57.156758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.585 qpair failed and we were unable to recover it. 
00:24:19.585 [2024-07-24 19:06:57.156884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.585 [2024-07-24 19:06:57.156909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.585 qpair failed and we were unable to recover it. 00:24:19.585 [2024-07-24 19:06:57.157062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.585 [2024-07-24 19:06:57.157086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.585 qpair failed and we were unable to recover it. 00:24:19.585 [2024-07-24 19:06:57.157246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.585 [2024-07-24 19:06:57.157271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.585 qpair failed and we were unable to recover it. 00:24:19.585 [2024-07-24 19:06:57.157462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.585 [2024-07-24 19:06:57.157490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.585 qpair failed and we were unable to recover it. 00:24:19.585 [2024-07-24 19:06:57.157636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.585 [2024-07-24 19:06:57.157661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.585 qpair failed and we were unable to recover it. 00:24:19.585 [2024-07-24 19:06:57.157812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.585 [2024-07-24 19:06:57.157837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.585 qpair failed and we were unable to recover it. 00:24:19.585 [2024-07-24 19:06:57.157987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.585 [2024-07-24 19:06:57.158012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.585 qpair failed and we were unable to recover it. 00:24:19.585 [2024-07-24 19:06:57.158222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.585 [2024-07-24 19:06:57.158247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.585 qpair failed and we were unable to recover it. 00:24:19.585 [2024-07-24 19:06:57.158399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.585 [2024-07-24 19:06:57.158425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.585 qpair failed and we were unable to recover it. 00:24:19.585 [2024-07-24 19:06:57.158547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.585 [2024-07-24 19:06:57.158572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.585 qpair failed and we were unable to recover it. 
00:24:19.585 [2024-07-24 19:06:57.158706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.585 [2024-07-24 19:06:57.158731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.585 qpair failed and we were unable to recover it. 00:24:19.585 [2024-07-24 19:06:57.158877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.585 [2024-07-24 19:06:57.158902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.585 qpair failed and we were unable to recover it. 00:24:19.585 [2024-07-24 19:06:57.159058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.585 [2024-07-24 19:06:57.159082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.585 qpair failed and we were unable to recover it. 00:24:19.585 [2024-07-24 19:06:57.159242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.585 [2024-07-24 19:06:57.159266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.585 qpair failed and we were unable to recover it. 00:24:19.585 [2024-07-24 19:06:57.159394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.585 [2024-07-24 19:06:57.159419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.585 qpair failed and we were unable to recover it. 00:24:19.585 [2024-07-24 19:06:57.159594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.586 [2024-07-24 19:06:57.159619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.586 qpair failed and we were unable to recover it. 00:24:19.586 [2024-07-24 19:06:57.159767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.586 [2024-07-24 19:06:57.159792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.586 qpair failed and we were unable to recover it. 00:24:19.586 [2024-07-24 19:06:57.159918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.586 [2024-07-24 19:06:57.159942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.586 qpair failed and we were unable to recover it. 00:24:19.586 [2024-07-24 19:06:57.160120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.586 [2024-07-24 19:06:57.160152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.586 qpair failed and we were unable to recover it. 00:24:19.586 [2024-07-24 19:06:57.160292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.586 [2024-07-24 19:06:57.160318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.586 qpair failed and we were unable to recover it. 
00:24:19.586 [2024-07-24 19:06:57.160455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.586 [2024-07-24 19:06:57.160480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.586 qpair failed and we were unable to recover it. 00:24:19.586 [2024-07-24 19:06:57.160652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.586 [2024-07-24 19:06:57.160676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.586 qpair failed and we were unable to recover it. 00:24:19.586 [2024-07-24 19:06:57.160838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.586 [2024-07-24 19:06:57.160862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.586 qpair failed and we were unable to recover it. 00:24:19.586 [2024-07-24 19:06:57.161011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.586 [2024-07-24 19:06:57.161035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.586 qpair failed and we were unable to recover it. 00:24:19.586 [2024-07-24 19:06:57.161193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.586 [2024-07-24 19:06:57.161218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.586 qpair failed and we were unable to recover it. 00:24:19.586 [2024-07-24 19:06:57.161373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.586 [2024-07-24 19:06:57.161397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.586 qpair failed and we were unable to recover it. 00:24:19.586 [2024-07-24 19:06:57.161525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.586 [2024-07-24 19:06:57.161549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.586 qpair failed and we were unable to recover it. 00:24:19.586 [2024-07-24 19:06:57.161695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.586 [2024-07-24 19:06:57.161726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.586 qpair failed and we were unable to recover it. 00:24:19.586 [2024-07-24 19:06:57.161904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.586 [2024-07-24 19:06:57.161929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.586 qpair failed and we were unable to recover it. 00:24:19.586 [2024-07-24 19:06:57.162056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.586 [2024-07-24 19:06:57.162080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.586 qpair failed and we were unable to recover it. 
00:24:19.878 [2024-07-24 19:06:57.162253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.878 [2024-07-24 19:06:57.162279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.878 qpair failed and we were unable to recover it. 00:24:19.878 [2024-07-24 19:06:57.162433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.162457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 00:24:19.879 [2024-07-24 19:06:57.162573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.162597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 00:24:19.879 [2024-07-24 19:06:57.162748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.162773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 00:24:19.879 [2024-07-24 19:06:57.162902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.162927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 00:24:19.879 [2024-07-24 19:06:57.163052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.163076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 00:24:19.879 [2024-07-24 19:06:57.163236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.163261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 00:24:19.879 [2024-07-24 19:06:57.163412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.163437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 00:24:19.879 [2024-07-24 19:06:57.163610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.163635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 00:24:19.879 [2024-07-24 19:06:57.163824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.163848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 
00:24:19.879 [2024-07-24 19:06:57.164002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.164027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 00:24:19.879 [2024-07-24 19:06:57.164154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.164179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 00:24:19.879 [2024-07-24 19:06:57.164317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.164341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 00:24:19.879 [2024-07-24 19:06:57.164532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.164556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 00:24:19.879 [2024-07-24 19:06:57.164709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.164734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 00:24:19.879 [2024-07-24 19:06:57.164923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.164947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 00:24:19.879 [2024-07-24 19:06:57.165098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.165127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 00:24:19.879 [2024-07-24 19:06:57.165256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.165282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 00:24:19.879 [2024-07-24 19:06:57.165430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.165455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 00:24:19.879 [2024-07-24 19:06:57.165581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.165605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 
00:24:19.879 [2024-07-24 19:06:57.165760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.165787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 00:24:19.879 [2024-07-24 19:06:57.165917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.165941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 00:24:19.879 [2024-07-24 19:06:57.166089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.166117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 00:24:19.879 [2024-07-24 19:06:57.166285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.166310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 00:24:19.879 [2024-07-24 19:06:57.166462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.166492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 00:24:19.879 [2024-07-24 19:06:57.166672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.166697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 00:24:19.879 [2024-07-24 19:06:57.166815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.166840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 00:24:19.879 [2024-07-24 19:06:57.166989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.167014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 00:24:19.879 [2024-07-24 19:06:57.167151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.167176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 00:24:19.879 [2024-07-24 19:06:57.167368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.167396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 
00:24:19.879 [2024-07-24 19:06:57.167600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.167624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 00:24:19.879 [2024-07-24 19:06:57.167754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.167779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 00:24:19.879 [2024-07-24 19:06:57.167903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.167928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 00:24:19.879 [2024-07-24 19:06:57.168083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.168115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 00:24:19.879 [2024-07-24 19:06:57.168311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.168336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 00:24:19.879 [2024-07-24 19:06:57.168513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.879 [2024-07-24 19:06:57.168541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.879 qpair failed and we were unable to recover it. 00:24:19.879 [2024-07-24 19:06:57.168682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.168707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 00:24:19.880 [2024-07-24 19:06:57.168851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.168875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 00:24:19.880 [2024-07-24 19:06:57.169010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.169035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 00:24:19.880 [2024-07-24 19:06:57.169185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.169210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 
00:24:19.880 [2024-07-24 19:06:57.169340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.169364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 00:24:19.880 [2024-07-24 19:06:57.169516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.169541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 00:24:19.880 [2024-07-24 19:06:57.169665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.169689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 00:24:19.880 [2024-07-24 19:06:57.169817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.169841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 00:24:19.880 [2024-07-24 19:06:57.169990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.170015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 00:24:19.880 [2024-07-24 19:06:57.170216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.170241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 00:24:19.880 [2024-07-24 19:06:57.170391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.170415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 00:24:19.880 [2024-07-24 19:06:57.170564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.170590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 00:24:19.880 [2024-07-24 19:06:57.170765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.170792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 00:24:19.880 [2024-07-24 19:06:57.170968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.170993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 
00:24:19.880 [2024-07-24 19:06:57.171122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.171147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 00:24:19.880 [2024-07-24 19:06:57.171279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.171310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 00:24:19.880 [2024-07-24 19:06:57.171441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.171467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 00:24:19.880 [2024-07-24 19:06:57.171625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.171650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 00:24:19.880 [2024-07-24 19:06:57.171777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.171801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 00:24:19.880 [2024-07-24 19:06:57.171953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.171978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 00:24:19.880 [2024-07-24 19:06:57.172125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.172156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 00:24:19.880 [2024-07-24 19:06:57.172284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.172308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 00:24:19.880 [2024-07-24 19:06:57.172432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.172457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 00:24:19.880 [2024-07-24 19:06:57.172609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.172648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 
00:24:19.880 [2024-07-24 19:06:57.172841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.172868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 00:24:19.880 [2024-07-24 19:06:57.173050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.173075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 00:24:19.880 [2024-07-24 19:06:57.173214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.173240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 00:24:19.880 [2024-07-24 19:06:57.173390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.173414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 00:24:19.880 [2024-07-24 19:06:57.173605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.173629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 00:24:19.880 [2024-07-24 19:06:57.173761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.173786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 00:24:19.880 [2024-07-24 19:06:57.173939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.173964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 00:24:19.880 [2024-07-24 19:06:57.174135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.174161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 00:24:19.880 [2024-07-24 19:06:57.174312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.174337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 00:24:19.880 [2024-07-24 19:06:57.174496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.174539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 
00:24:19.880 [2024-07-24 19:06:57.174713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.880 [2024-07-24 19:06:57.174737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.880 qpair failed and we were unable to recover it. 00:24:19.880 [2024-07-24 19:06:57.174891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.174915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 00:24:19.881 [2024-07-24 19:06:57.175071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.175096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 00:24:19.881 [2024-07-24 19:06:57.175243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.175268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 00:24:19.881 [2024-07-24 19:06:57.175430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.175457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 00:24:19.881 [2024-07-24 19:06:57.175622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.175649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 00:24:19.881 [2024-07-24 19:06:57.175821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.175845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 00:24:19.881 [2024-07-24 19:06:57.175999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.176024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 00:24:19.881 [2024-07-24 19:06:57.176191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.176216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 00:24:19.881 [2024-07-24 19:06:57.176348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.176373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 
00:24:19.881 [2024-07-24 19:06:57.176526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.176551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 00:24:19.881 [2024-07-24 19:06:57.176701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.176726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 00:24:19.881 [2024-07-24 19:06:57.176900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.176925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 00:24:19.881 [2024-07-24 19:06:57.177047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.177072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 00:24:19.881 [2024-07-24 19:06:57.177231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.177255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 00:24:19.881 [2024-07-24 19:06:57.177397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.177422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 00:24:19.881 [2024-07-24 19:06:57.177579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.177604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 00:24:19.881 [2024-07-24 19:06:57.177733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.177758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 00:24:19.881 [2024-07-24 19:06:57.177923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.177951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 00:24:19.881 [2024-07-24 19:06:57.178154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.178181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 
00:24:19.881 [2024-07-24 19:06:57.178333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.178357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 00:24:19.881 [2024-07-24 19:06:57.178510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.178534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 00:24:19.881 [2024-07-24 19:06:57.178682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.178711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 00:24:19.881 [2024-07-24 19:06:57.178892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.178917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 00:24:19.881 [2024-07-24 19:06:57.179037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.179062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 00:24:19.881 [2024-07-24 19:06:57.179216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.179257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 00:24:19.881 [2024-07-24 19:06:57.179402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.179429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 00:24:19.881 [2024-07-24 19:06:57.179603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.179627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 00:24:19.881 [2024-07-24 19:06:57.179751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.179775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 00:24:19.881 [2024-07-24 19:06:57.179928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.179952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 
00:24:19.881 [2024-07-24 19:06:57.180111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.180136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 00:24:19.881 [2024-07-24 19:06:57.180313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.180337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 00:24:19.881 [2024-07-24 19:06:57.180502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.180529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 00:24:19.881 [2024-07-24 19:06:57.180699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.180724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 00:24:19.881 [2024-07-24 19:06:57.180906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.180930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 00:24:19.881 [2024-07-24 19:06:57.181061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.181085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.881 qpair failed and we were unable to recover it. 00:24:19.881 [2024-07-24 19:06:57.181255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.881 [2024-07-24 19:06:57.181281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 00:24:19.882 [2024-07-24 19:06:57.181443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.181485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 00:24:19.882 [2024-07-24 19:06:57.181642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.181668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 00:24:19.882 [2024-07-24 19:06:57.181837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.181861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 
00:24:19.882 [2024-07-24 19:06:57.182034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.182062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 00:24:19.882 [2024-07-24 19:06:57.182293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.182319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 00:24:19.882 [2024-07-24 19:06:57.182476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.182501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 00:24:19.882 [2024-07-24 19:06:57.182654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.182679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 00:24:19.882 [2024-07-24 19:06:57.182885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.182910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 00:24:19.882 [2024-07-24 19:06:57.183061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.183087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 00:24:19.882 [2024-07-24 19:06:57.183234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.183258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 00:24:19.882 [2024-07-24 19:06:57.183389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.183414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 00:24:19.882 [2024-07-24 19:06:57.183587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.183611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 00:24:19.882 [2024-07-24 19:06:57.183816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.183845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 
00:24:19.882 [2024-07-24 19:06:57.184008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.184032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 00:24:19.882 [2024-07-24 19:06:57.184216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.184242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 00:24:19.882 [2024-07-24 19:06:57.184396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.184421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 00:24:19.882 [2024-07-24 19:06:57.184553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.184578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 00:24:19.882 [2024-07-24 19:06:57.184699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.184724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 00:24:19.882 [2024-07-24 19:06:57.184891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.184918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 00:24:19.882 [2024-07-24 19:06:57.185051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.185077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 00:24:19.882 [2024-07-24 19:06:57.185270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.185295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 00:24:19.882 [2024-07-24 19:06:57.185434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.185459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 00:24:19.882 [2024-07-24 19:06:57.185594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.185618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 
00:24:19.882 [2024-07-24 19:06:57.185743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.185768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 00:24:19.882 [2024-07-24 19:06:57.185936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.185963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 00:24:19.882 [2024-07-24 19:06:57.186138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.186173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 00:24:19.882 [2024-07-24 19:06:57.186353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.186377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 00:24:19.882 [2024-07-24 19:06:57.186499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.186524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 00:24:19.882 [2024-07-24 19:06:57.186672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.186696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 00:24:19.882 [2024-07-24 19:06:57.186824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.186848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 00:24:19.882 [2024-07-24 19:06:57.186995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.187020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 00:24:19.882 [2024-07-24 19:06:57.187199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.187233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 00:24:19.882 [2024-07-24 19:06:57.187362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.187387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 
00:24:19.882 [2024-07-24 19:06:57.187515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.187540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.882 qpair failed and we were unable to recover it. 00:24:19.882 [2024-07-24 19:06:57.187711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.882 [2024-07-24 19:06:57.187738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.883 qpair failed and we were unable to recover it. 00:24:19.883 [2024-07-24 19:06:57.187938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.883 [2024-07-24 19:06:57.187963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.883 qpair failed and we were unable to recover it. 00:24:19.883 [2024-07-24 19:06:57.188114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.883 [2024-07-24 19:06:57.188139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.883 qpair failed and we were unable to recover it. 00:24:19.883 [2024-07-24 19:06:57.188296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.883 [2024-07-24 19:06:57.188321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.883 qpair failed and we were unable to recover it. 00:24:19.883 [2024-07-24 19:06:57.188470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.883 [2024-07-24 19:06:57.188494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.883 qpair failed and we were unable to recover it. 00:24:19.883 [2024-07-24 19:06:57.188638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.883 [2024-07-24 19:06:57.188666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.883 qpair failed and we were unable to recover it. 00:24:19.883 [2024-07-24 19:06:57.188817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.883 [2024-07-24 19:06:57.188842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.883 qpair failed and we were unable to recover it. 00:24:19.883 [2024-07-24 19:06:57.188975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.883 [2024-07-24 19:06:57.188999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.883 qpair failed and we were unable to recover it. 00:24:19.883 [2024-07-24 19:06:57.189178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.883 [2024-07-24 19:06:57.189204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.883 qpair failed and we were unable to recover it. 
00:24:19.883 [2024-07-24 19:06:57.189328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.883 [2024-07-24 19:06:57.189354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.883 qpair failed and we were unable to recover it. 00:24:19.883 [2024-07-24 19:06:57.189480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.883 [2024-07-24 19:06:57.189505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.883 qpair failed and we were unable to recover it. 00:24:19.883 [2024-07-24 19:06:57.189655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.883 [2024-07-24 19:06:57.189697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.883 qpair failed and we were unable to recover it. 00:24:19.883 [2024-07-24 19:06:57.189850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.883 [2024-07-24 19:06:57.189875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.883 qpair failed and we were unable to recover it. 00:24:19.883 [2024-07-24 19:06:57.190030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.883 [2024-07-24 19:06:57.190054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.883 qpair failed and we were unable to recover it. 00:24:19.883 [2024-07-24 19:06:57.190219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.883 [2024-07-24 19:06:57.190244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.883 qpair failed and we were unable to recover it. 00:24:19.883 [2024-07-24 19:06:57.190376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.883 [2024-07-24 19:06:57.190416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.883 qpair failed and we were unable to recover it. 00:24:19.883 [2024-07-24 19:06:57.190593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.883 [2024-07-24 19:06:57.190618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.883 qpair failed and we were unable to recover it. 00:24:19.883 [2024-07-24 19:06:57.190765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.883 [2024-07-24 19:06:57.190792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.883 qpair failed and we were unable to recover it. 00:24:19.883 [2024-07-24 19:06:57.190942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.883 [2024-07-24 19:06:57.190969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.883 qpair failed and we were unable to recover it. 
00:24:19.883 [2024-07-24 19:06:57.191118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.883 [2024-07-24 19:06:57.191144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.883 qpair failed and we were unable to recover it. 00:24:19.883 [2024-07-24 19:06:57.191296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.883 [2024-07-24 19:06:57.191320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.883 qpair failed and we were unable to recover it. 00:24:19.883 [2024-07-24 19:06:57.191479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.883 [2024-07-24 19:06:57.191523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.883 qpair failed and we were unable to recover it. 00:24:19.883 [2024-07-24 19:06:57.191732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.883 [2024-07-24 19:06:57.191757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.883 qpair failed and we were unable to recover it. 00:24:19.883 [2024-07-24 19:06:57.191904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.883 [2024-07-24 19:06:57.191933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.883 qpair failed and we were unable to recover it. 00:24:19.883 [2024-07-24 19:06:57.192066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.883 [2024-07-24 19:06:57.192094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.883 qpair failed and we were unable to recover it. 00:24:19.883 [2024-07-24 19:06:57.192303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.883 [2024-07-24 19:06:57.192328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.883 qpair failed and we were unable to recover it. 00:24:19.883 [2024-07-24 19:06:57.192478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.883 [2024-07-24 19:06:57.192519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.883 qpair failed and we were unable to recover it. 00:24:19.883 [2024-07-24 19:06:57.192655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.883 [2024-07-24 19:06:57.192683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.883 qpair failed and we were unable to recover it. 00:24:19.883 [2024-07-24 19:06:57.192859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.883 [2024-07-24 19:06:57.192884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.883 qpair failed and we were unable to recover it. 
00:24:19.883 [2024-07-24 19:06:57.193018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.883 [2024-07-24 19:06:57.193043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.883 qpair failed and we were unable to recover it. 00:24:19.883 [2024-07-24 19:06:57.193181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.193208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.193355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.193380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.193546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.193574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.193764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.193789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.193914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.193940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.194088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.194119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.194292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.194318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.194470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.194495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.194649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.194674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 
00:24:19.884 [2024-07-24 19:06:57.194851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.194876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.195020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.195045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.195199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.195224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.195378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.195403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.195530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.195555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.195690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.195714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.195862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.195887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.196035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.196060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.196188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.196229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.196427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.196452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 
00:24:19.884 [2024-07-24 19:06:57.196595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.196620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.196769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.196811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.197003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.197028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.197175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.197202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.197352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.197377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.197536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.197561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.197683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.197708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.197884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.197909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.198061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.198115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.198314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.198339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 
00:24:19.884 [2024-07-24 19:06:57.198483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.198508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.198664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.198689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.198840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.198865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.199017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.199042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.199174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.199200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.199365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.199391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.199566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.199591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.199714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.199741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.199925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.884 [2024-07-24 19:06:57.199950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.884 qpair failed and we were unable to recover it. 00:24:19.884 [2024-07-24 19:06:57.200144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.885 [2024-07-24 19:06:57.200174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.885 qpair failed and we were unable to recover it. 
00:24:19.890 [2024-07-24 19:06:57.234955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.890 [2024-07-24 19:06:57.234979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.890 qpair failed and we were unable to recover it. 00:24:19.890 [2024-07-24 19:06:57.235113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.890 [2024-07-24 19:06:57.235148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.890 qpair failed and we were unable to recover it. 00:24:19.890 [2024-07-24 19:06:57.235334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.890 [2024-07-24 19:06:57.235361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.890 qpair failed and we were unable to recover it. 00:24:19.890 [2024-07-24 19:06:57.235559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.890 [2024-07-24 19:06:57.235583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.890 qpair failed and we were unable to recover it. 00:24:19.890 [2024-07-24 19:06:57.235757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.890 [2024-07-24 19:06:57.235784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.890 qpair failed and we were unable to recover it. 00:24:19.890 [2024-07-24 19:06:57.235977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.890 [2024-07-24 19:06:57.236004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.890 qpair failed and we were unable to recover it. 00:24:19.890 [2024-07-24 19:06:57.236151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.890 [2024-07-24 19:06:57.236176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.890 qpair failed and we were unable to recover it. 00:24:19.890 [2024-07-24 19:06:57.236371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.890 [2024-07-24 19:06:57.236397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.890 qpair failed and we were unable to recover it. 00:24:19.890 [2024-07-24 19:06:57.236571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.890 [2024-07-24 19:06:57.236595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.890 qpair failed and we were unable to recover it. 00:24:19.890 [2024-07-24 19:06:57.236772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.890 [2024-07-24 19:06:57.236797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.890 qpair failed and we were unable to recover it. 
00:24:19.890 [2024-07-24 19:06:57.236920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.890 [2024-07-24 19:06:57.236944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.890 qpair failed and we were unable to recover it. 00:24:19.890 [2024-07-24 19:06:57.237072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.890 [2024-07-24 19:06:57.237098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.890 qpair failed and we were unable to recover it. 00:24:19.890 [2024-07-24 19:06:57.237227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.890 [2024-07-24 19:06:57.237252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.890 qpair failed and we were unable to recover it. 00:24:19.890 [2024-07-24 19:06:57.237381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.890 [2024-07-24 19:06:57.237405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.890 qpair failed and we were unable to recover it. 00:24:19.890 [2024-07-24 19:06:57.237534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.890 [2024-07-24 19:06:57.237558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.890 qpair failed and we were unable to recover it. 00:24:19.890 [2024-07-24 19:06:57.237710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.890 [2024-07-24 19:06:57.237735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.890 qpair failed and we were unable to recover it. 00:24:19.890 [2024-07-24 19:06:57.238799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.890 [2024-07-24 19:06:57.238836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.890 qpair failed and we were unable to recover it. 00:24:19.890 [2024-07-24 19:06:57.239012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.890 [2024-07-24 19:06:57.239041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.890 qpair failed and we were unable to recover it. 00:24:19.890 [2024-07-24 19:06:57.239207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.890 [2024-07-24 19:06:57.239234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.890 qpair failed and we were unable to recover it. 00:24:19.890 [2024-07-24 19:06:57.239357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.890 [2024-07-24 19:06:57.239381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.890 qpair failed and we were unable to recover it. 
00:24:19.890 [2024-07-24 19:06:57.239586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.890 [2024-07-24 19:06:57.239614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.890 qpair failed and we were unable to recover it. 00:24:19.890 [2024-07-24 19:06:57.239756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.890 [2024-07-24 19:06:57.239781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.890 qpair failed and we were unable to recover it. 00:24:19.890 [2024-07-24 19:06:57.239915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.890 [2024-07-24 19:06:57.239939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.890 qpair failed and we were unable to recover it. 00:24:19.890 [2024-07-24 19:06:57.240060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.890 [2024-07-24 19:06:57.240084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.890 qpair failed and we were unable to recover it. 00:24:19.890 [2024-07-24 19:06:57.240241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.890 [2024-07-24 19:06:57.240266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.890 qpair failed and we were unable to recover it. 00:24:19.890 [2024-07-24 19:06:57.240397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.890 [2024-07-24 19:06:57.240422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.890 qpair failed and we were unable to recover it. 00:24:19.890 [2024-07-24 19:06:57.240549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.890 [2024-07-24 19:06:57.240575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.890 qpair failed and we were unable to recover it. 00:24:19.890 [2024-07-24 19:06:57.240728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.890 [2024-07-24 19:06:57.240752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.890 qpair failed and we were unable to recover it. 00:24:19.890 [2024-07-24 19:06:57.240897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.890 [2024-07-24 19:06:57.240924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.890 qpair failed and we were unable to recover it. 00:24:19.890 [2024-07-24 19:06:57.241094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.890 [2024-07-24 19:06:57.241127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.890 qpair failed and we were unable to recover it. 
00:24:19.890 [2024-07-24 19:06:57.241283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.890 [2024-07-24 19:06:57.241308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.890 qpair failed and we were unable to recover it. 00:24:19.890 [2024-07-24 19:06:57.241437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.890 [2024-07-24 19:06:57.241466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.890 qpair failed and we were unable to recover it. 00:24:19.890 [2024-07-24 19:06:57.241618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.241644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.891 [2024-07-24 19:06:57.241771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.241797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.891 [2024-07-24 19:06:57.241931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.241972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.891 [2024-07-24 19:06:57.242129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.242158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.891 [2024-07-24 19:06:57.242336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.242360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.891 [2024-07-24 19:06:57.242501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.242529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.891 [2024-07-24 19:06:57.242676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.242703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.891 [2024-07-24 19:06:57.242872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.242897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 
00:24:19.891 [2024-07-24 19:06:57.243068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.243096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.891 [2024-07-24 19:06:57.243277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.243305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.891 [2024-07-24 19:06:57.243506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.243531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.891 [2024-07-24 19:06:57.243701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.243729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.891 [2024-07-24 19:06:57.243903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.243927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.891 [2024-07-24 19:06:57.244077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.244111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.891 [2024-07-24 19:06:57.244261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.244286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.891 [2024-07-24 19:06:57.244440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.244465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.891 [2024-07-24 19:06:57.244611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.244636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.891 [2024-07-24 19:06:57.244792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.244817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 
00:24:19.891 [2024-07-24 19:06:57.244967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.245007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.891 [2024-07-24 19:06:57.245205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.245231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.891 [2024-07-24 19:06:57.245382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.245407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.891 [2024-07-24 19:06:57.245531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.245556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.891 [2024-07-24 19:06:57.245688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.245712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.891 [2024-07-24 19:06:57.245843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.245867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.891 [2024-07-24 19:06:57.246059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.246087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.891 [2024-07-24 19:06:57.246290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.246315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.891 [2024-07-24 19:06:57.246491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.246525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.891 [2024-07-24 19:06:57.246668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.246695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 
00:24:19.891 [2024-07-24 19:06:57.246865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.246892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.891 [2024-07-24 19:06:57.247070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.247098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.891 [2024-07-24 19:06:57.247247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.247274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.891 [2024-07-24 19:06:57.247424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.247449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.891 [2024-07-24 19:06:57.247598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.247623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.891 [2024-07-24 19:06:57.247769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.247796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.891 [2024-07-24 19:06:57.247969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.247993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.891 [2024-07-24 19:06:57.248125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.248150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.891 [2024-07-24 19:06:57.248282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.248307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.891 [2024-07-24 19:06:57.248481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.248506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 
00:24:19.891 [2024-07-24 19:06:57.248658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.891 [2024-07-24 19:06:57.248682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.891 qpair failed and we were unable to recover it. 00:24:19.892 [2024-07-24 19:06:57.248832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.248857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 00:24:19.892 [2024-07-24 19:06:57.249009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.249034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 00:24:19.892 [2024-07-24 19:06:57.249207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.249232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 00:24:19.892 [2024-07-24 19:06:57.249356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.249382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 00:24:19.892 [2024-07-24 19:06:57.249511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.249536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 00:24:19.892 [2024-07-24 19:06:57.249668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.249692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 00:24:19.892 [2024-07-24 19:06:57.249878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.249904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 00:24:19.892 [2024-07-24 19:06:57.250078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.250122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 00:24:19.892 [2024-07-24 19:06:57.250254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.250278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 
00:24:19.892 [2024-07-24 19:06:57.250434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.250459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 00:24:19.892 [2024-07-24 19:06:57.250591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.250617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 00:24:19.892 [2024-07-24 19:06:57.250795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.250819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 00:24:19.892 [2024-07-24 19:06:57.250945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.250970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 00:24:19.892 [2024-07-24 19:06:57.251171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.251197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 00:24:19.892 [2024-07-24 19:06:57.251348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.251375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 00:24:19.892 [2024-07-24 19:06:57.251554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.251579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 00:24:19.892 [2024-07-24 19:06:57.251725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.251750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 00:24:19.892 [2024-07-24 19:06:57.251903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.251927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 00:24:19.892 [2024-07-24 19:06:57.252088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.252119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 
00:24:19.892 [2024-07-24 19:06:57.252251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.252276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 00:24:19.892 [2024-07-24 19:06:57.252401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.252425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 00:24:19.892 [2024-07-24 19:06:57.252602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.252643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 00:24:19.892 [2024-07-24 19:06:57.252815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.252839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 00:24:19.892 [2024-07-24 19:06:57.252970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.252994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 00:24:19.892 [2024-07-24 19:06:57.253171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.253200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 00:24:19.892 [2024-07-24 19:06:57.253358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.253382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 00:24:19.892 [2024-07-24 19:06:57.253556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.253581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 00:24:19.892 [2024-07-24 19:06:57.253733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.253773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 00:24:19.892 [2024-07-24 19:06:57.253948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.253974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 
00:24:19.892 [2024-07-24 19:06:57.254147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.254176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 00:24:19.892 [2024-07-24 19:06:57.254305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.254333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 00:24:19.892 [2024-07-24 19:06:57.254497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.254521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 00:24:19.892 [2024-07-24 19:06:57.254670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.254711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 00:24:19.892 [2024-07-24 19:06:57.254869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.254895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 00:24:19.892 [2024-07-24 19:06:57.255044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.255069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 00:24:19.892 [2024-07-24 19:06:57.255225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.255251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 00:24:19.892 [2024-07-24 19:06:57.255404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.892 [2024-07-24 19:06:57.255445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.892 qpair failed and we were unable to recover it. 00:24:19.893 [2024-07-24 19:06:57.255620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.893 [2024-07-24 19:06:57.255645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.893 qpair failed and we were unable to recover it. 00:24:19.893 [2024-07-24 19:06:57.255837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.893 [2024-07-24 19:06:57.255865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.893 qpair failed and we were unable to recover it. 
00:24:19.893 [2024-07-24 19:06:57.256034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.893 [2024-07-24 19:06:57.256061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.893 qpair failed and we were unable to recover it. 00:24:19.893 [2024-07-24 19:06:57.256221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.893 [2024-07-24 19:06:57.256247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.893 qpair failed and we were unable to recover it. 00:24:19.893 [2024-07-24 19:06:57.256389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.893 [2024-07-24 19:06:57.256413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.893 qpair failed and we were unable to recover it. 00:24:19.893 [2024-07-24 19:06:57.256596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.893 [2024-07-24 19:06:57.256620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.893 qpair failed and we were unable to recover it. 00:24:19.893 [2024-07-24 19:06:57.256749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.893 [2024-07-24 19:06:57.256774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.893 qpair failed and we were unable to recover it. 00:24:19.893 [2024-07-24 19:06:57.256894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.893 [2024-07-24 19:06:57.256918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.893 qpair failed and we were unable to recover it. 00:24:19.893 [2024-07-24 19:06:57.257118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.893 [2024-07-24 19:06:57.257146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.893 qpair failed and we were unable to recover it. 00:24:19.893 [2024-07-24 19:06:57.257345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.893 [2024-07-24 19:06:57.257370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.893 qpair failed and we were unable to recover it. 00:24:19.893 [2024-07-24 19:06:57.257549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.893 [2024-07-24 19:06:57.257574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.893 qpair failed and we were unable to recover it. 00:24:19.893 [2024-07-24 19:06:57.257704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.893 [2024-07-24 19:06:57.257728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.893 qpair failed and we were unable to recover it. 
00:24:19.893 [2024-07-24 19:06:57.257878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.893 [2024-07-24 19:06:57.257904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.893 qpair failed and we were unable to recover it. 00:24:19.893 [2024-07-24 19:06:57.258037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.893 [2024-07-24 19:06:57.258061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.893 qpair failed and we were unable to recover it. 00:24:19.893 [2024-07-24 19:06:57.258214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.893 [2024-07-24 19:06:57.258256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.893 qpair failed and we were unable to recover it. 00:24:19.893 [2024-07-24 19:06:57.258429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.893 [2024-07-24 19:06:57.258454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.893 qpair failed and we were unable to recover it. 00:24:19.893 [2024-07-24 19:06:57.258628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.893 [2024-07-24 19:06:57.258655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.893 qpair failed and we were unable to recover it. 00:24:19.893 [2024-07-24 19:06:57.258802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.893 [2024-07-24 19:06:57.258830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.893 qpair failed and we were unable to recover it. 00:24:19.893 [2024-07-24 19:06:57.259028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.893 [2024-07-24 19:06:57.259056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.893 qpair failed and we were unable to recover it. 00:24:19.893 [2024-07-24 19:06:57.259190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.893 [2024-07-24 19:06:57.259215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.893 qpair failed and we were unable to recover it. 00:24:19.893 [2024-07-24 19:06:57.259371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.893 [2024-07-24 19:06:57.259395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.893 qpair failed and we were unable to recover it. 00:24:19.893 [2024-07-24 19:06:57.259550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.893 [2024-07-24 19:06:57.259575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.893 qpair failed and we were unable to recover it. 
00:24:19.893 [2024-07-24 19:06:57.259725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.893 [2024-07-24 19:06:57.259750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.893 qpair failed and we were unable to recover it. 00:24:19.893 [2024-07-24 19:06:57.259904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.893 [2024-07-24 19:06:57.259928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.893 qpair failed and we were unable to recover it. 00:24:19.893 [2024-07-24 19:06:57.260151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.893 [2024-07-24 19:06:57.260176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.893 qpair failed and we were unable to recover it. 00:24:19.893 [2024-07-24 19:06:57.260325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.893 [2024-07-24 19:06:57.260350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.893 qpair failed and we were unable to recover it. 00:24:19.893 [2024-07-24 19:06:57.260501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.893 [2024-07-24 19:06:57.260526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.893 qpair failed and we were unable to recover it. 00:24:19.893 [2024-07-24 19:06:57.260646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.893 [2024-07-24 19:06:57.260672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.893 qpair failed and we were unable to recover it. 00:24:19.893 [2024-07-24 19:06:57.260849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.893 [2024-07-24 19:06:57.260892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.893 qpair failed and we were unable to recover it. 00:24:19.893 [2024-07-24 19:06:57.261083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.893 [2024-07-24 19:06:57.261119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.893 qpair failed and we were unable to recover it. 00:24:19.893 [2024-07-24 19:06:57.261273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.893 [2024-07-24 19:06:57.261300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.893 qpair failed and we were unable to recover it. 00:24:19.893 [2024-07-24 19:06:57.261467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.893 [2024-07-24 19:06:57.261494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.893 qpair failed and we were unable to recover it. 
00:24:19.893 [2024-07-24 19:06:57.261669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.893 [2024-07-24 19:06:57.261696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:19.893 qpair failed and we were unable to recover it.
00:24:19.899 [the same three-line failure repeats verbatim for roughly 200 further attempts between 19:06:57.261 and 19:06:57.301, with only the SPDK timestamps advancing: every connect() on tqpair=0x17c6250 to addr=10.0.0.2, port=4420 fails with errno = 111, and the qpair never recovers]
00:24:19.899 [2024-07-24 19:06:57.301504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.899 [2024-07-24 19:06:57.301528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.899 qpair failed and we were unable to recover it. 00:24:19.899 [2024-07-24 19:06:57.301682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.899 [2024-07-24 19:06:57.301706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.899 qpair failed and we were unable to recover it. 00:24:19.899 [2024-07-24 19:06:57.301860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.899 [2024-07-24 19:06:57.301885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.899 qpair failed and we were unable to recover it. 00:24:19.899 [2024-07-24 19:06:57.302015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.899 [2024-07-24 19:06:57.302039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.899 qpair failed and we were unable to recover it. 00:24:19.899 [2024-07-24 19:06:57.302202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.899 [2024-07-24 19:06:57.302228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.899 qpair failed and we were unable to recover it. 00:24:19.899 [2024-07-24 19:06:57.302371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.899 [2024-07-24 19:06:57.302396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.899 qpair failed and we were unable to recover it. 00:24:19.899 [2024-07-24 19:06:57.302547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.899 [2024-07-24 19:06:57.302589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.899 qpair failed and we were unable to recover it. 00:24:19.899 [2024-07-24 19:06:57.302763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.899 [2024-07-24 19:06:57.302787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.899 qpair failed and we were unable to recover it. 00:24:19.899 [2024-07-24 19:06:57.302915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.899 [2024-07-24 19:06:57.302940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.899 qpair failed and we were unable to recover it. 00:24:19.899 [2024-07-24 19:06:57.303110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.899 [2024-07-24 19:06:57.303163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.899 qpair failed and we were unable to recover it. 
00:24:19.899 [2024-07-24 19:06:57.303324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.899 [2024-07-24 19:06:57.303351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.899 qpair failed and we were unable to recover it. 00:24:19.899 [2024-07-24 19:06:57.303537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.899 [2024-07-24 19:06:57.303581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.899 qpair failed and we were unable to recover it. 00:24:19.899 [2024-07-24 19:06:57.303781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.899 [2024-07-24 19:06:57.303828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.899 qpair failed and we were unable to recover it. 00:24:19.899 [2024-07-24 19:06:57.303996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.899 [2024-07-24 19:06:57.304024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.899 qpair failed and we were unable to recover it. 00:24:19.899 [2024-07-24 19:06:57.304175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.899 [2024-07-24 19:06:57.304200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.900 [2024-07-24 19:06:57.304353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.304377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.900 [2024-07-24 19:06:57.304546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.304574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.900 [2024-07-24 19:06:57.304735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.304762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.900 [2024-07-24 19:06:57.304920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.304947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.900 [2024-07-24 19:06:57.305115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.305140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 
00:24:19.900 [2024-07-24 19:06:57.305337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.305369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.900 [2024-07-24 19:06:57.305536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.305564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.900 [2024-07-24 19:06:57.305756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.305783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.900 [2024-07-24 19:06:57.305963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.305987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.900 [2024-07-24 19:06:57.306148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.306190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.900 [2024-07-24 19:06:57.306328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.306356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.900 [2024-07-24 19:06:57.306544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.306572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.900 [2024-07-24 19:06:57.306757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.306782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.900 [2024-07-24 19:06:57.306969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.306994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.900 [2024-07-24 19:06:57.307155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.307197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 
00:24:19.900 [2024-07-24 19:06:57.307373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.307398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.900 [2024-07-24 19:06:57.307552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.307577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.900 [2024-07-24 19:06:57.307748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.307775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.900 [2024-07-24 19:06:57.307922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.307950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.900 [2024-07-24 19:06:57.308116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.308144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.900 [2024-07-24 19:06:57.308350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.308374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.900 [2024-07-24 19:06:57.308515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.308542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.900 [2024-07-24 19:06:57.308709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.308737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.900 [2024-07-24 19:06:57.308903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.308932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.900 [2024-07-24 19:06:57.309075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.309100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 
00:24:19.900 [2024-07-24 19:06:57.309315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.309341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.900 [2024-07-24 19:06:57.309488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.309515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.900 [2024-07-24 19:06:57.309706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.309733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.900 [2024-07-24 19:06:57.309904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.309928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.900 [2024-07-24 19:06:57.310081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.310118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.900 [2024-07-24 19:06:57.310248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.310289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.900 [2024-07-24 19:06:57.310469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.310494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.900 [2024-07-24 19:06:57.310648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.310675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.900 [2024-07-24 19:06:57.310878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.310926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.900 [2024-07-24 19:06:57.311155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.311180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 
00:24:19.900 [2024-07-24 19:06:57.311331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.311373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.900 [2024-07-24 19:06:57.311523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.311548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.900 [2024-07-24 19:06:57.311678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.900 [2024-07-24 19:06:57.311717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.900 qpair failed and we were unable to recover it. 00:24:19.901 [2024-07-24 19:06:57.311883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.311910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 00:24:19.901 [2024-07-24 19:06:57.312078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.312112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 00:24:19.901 [2024-07-24 19:06:57.312266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.312292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 00:24:19.901 [2024-07-24 19:06:57.312412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.312436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 00:24:19.901 [2024-07-24 19:06:57.312588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.312613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 00:24:19.901 [2024-07-24 19:06:57.312793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.312821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 00:24:19.901 [2024-07-24 19:06:57.313019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.313044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 
00:24:19.901 [2024-07-24 19:06:57.313273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.313300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 00:24:19.901 [2024-07-24 19:06:57.313507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.313531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 00:24:19.901 [2024-07-24 19:06:57.313710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.313734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 00:24:19.901 [2024-07-24 19:06:57.313885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.313910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 00:24:19.901 [2024-07-24 19:06:57.314082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.314118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 00:24:19.901 [2024-07-24 19:06:57.314293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.314320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 00:24:19.901 [2024-07-24 19:06:57.314487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.314515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 00:24:19.901 [2024-07-24 19:06:57.314698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.314723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 00:24:19.901 [2024-07-24 19:06:57.314873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.314901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 00:24:19.901 [2024-07-24 19:06:57.315037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.315063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 
00:24:19.901 [2024-07-24 19:06:57.315217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.315244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 00:24:19.901 [2024-07-24 19:06:57.315447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.315472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 00:24:19.901 [2024-07-24 19:06:57.315646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.315673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 00:24:19.901 [2024-07-24 19:06:57.315837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.315864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 00:24:19.901 [2024-07-24 19:06:57.316014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.316038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 00:24:19.901 [2024-07-24 19:06:57.316228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.316255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 00:24:19.901 [2024-07-24 19:06:57.316403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.316431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 00:24:19.901 [2024-07-24 19:06:57.316598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.316625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 00:24:19.901 [2024-07-24 19:06:57.316755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.316783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 00:24:19.901 [2024-07-24 19:06:57.316962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.316987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 
00:24:19.901 [2024-07-24 19:06:57.317138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.317164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 00:24:19.901 [2024-07-24 19:06:57.317292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.317317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 00:24:19.901 [2024-07-24 19:06:57.317488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.317516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 00:24:19.901 [2024-07-24 19:06:57.317688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.317713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 00:24:19.901 [2024-07-24 19:06:57.317888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.317915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 00:24:19.901 [2024-07-24 19:06:57.318080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.318126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 00:24:19.901 [2024-07-24 19:06:57.318302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.318330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 00:24:19.901 [2024-07-24 19:06:57.318513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.318538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 00:24:19.901 [2024-07-24 19:06:57.318674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.318699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 00:24:19.901 [2024-07-24 19:06:57.318892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.318919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 
00:24:19.901 [2024-07-24 19:06:57.319057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.901 [2024-07-24 19:06:57.319084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.901 qpair failed and we were unable to recover it. 00:24:19.902 [2024-07-24 19:06:57.319261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.902 [2024-07-24 19:06:57.319287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.902 qpair failed and we were unable to recover it. 00:24:19.902 [2024-07-24 19:06:57.319436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.902 [2024-07-24 19:06:57.319479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.902 qpair failed and we were unable to recover it. 00:24:19.902 [2024-07-24 19:06:57.319642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.902 [2024-07-24 19:06:57.319670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.902 qpair failed and we were unable to recover it. 00:24:19.902 [2024-07-24 19:06:57.319812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.902 [2024-07-24 19:06:57.319841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.902 qpair failed and we were unable to recover it. 00:24:19.902 [2024-07-24 19:06:57.319983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.902 [2024-07-24 19:06:57.320008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.902 qpair failed and we were unable to recover it. 00:24:19.902 [2024-07-24 19:06:57.320151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.902 [2024-07-24 19:06:57.320177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.902 qpair failed and we were unable to recover it. 00:24:19.902 [2024-07-24 19:06:57.320308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.902 [2024-07-24 19:06:57.320333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.902 qpair failed and we were unable to recover it. 00:24:19.902 [2024-07-24 19:06:57.320513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.902 [2024-07-24 19:06:57.320541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.902 qpair failed and we were unable to recover it. 00:24:19.902 [2024-07-24 19:06:57.320731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.902 [2024-07-24 19:06:57.320756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.902 qpair failed and we were unable to recover it. 
00:24:19.902 [2024-07-24 19:06:57.320907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.902 [2024-07-24 19:06:57.320949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.902 qpair failed and we were unable to recover it. 00:24:19.902 [2024-07-24 19:06:57.321087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.902 [2024-07-24 19:06:57.321122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.902 qpair failed and we were unable to recover it. 00:24:19.902 [2024-07-24 19:06:57.321323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.902 [2024-07-24 19:06:57.321349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.902 qpair failed and we were unable to recover it. 00:24:19.902 [2024-07-24 19:06:57.321505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.902 [2024-07-24 19:06:57.321530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.902 qpair failed and we were unable to recover it. 00:24:19.902 [2024-07-24 19:06:57.321684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.902 [2024-07-24 19:06:57.321709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.902 qpair failed and we were unable to recover it. 00:24:19.902 [2024-07-24 19:06:57.321871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.902 [2024-07-24 19:06:57.321912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.902 qpair failed and we were unable to recover it. 00:24:19.902 [2024-07-24 19:06:57.322080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.902 [2024-07-24 19:06:57.322126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.902 qpair failed and we were unable to recover it. 00:24:19.902 [2024-07-24 19:06:57.322330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.902 [2024-07-24 19:06:57.322355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.902 qpair failed and we were unable to recover it. 00:24:19.902 [2024-07-24 19:06:57.322534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.902 [2024-07-24 19:06:57.322561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.902 qpair failed and we were unable to recover it. 00:24:19.902 [2024-07-24 19:06:57.322723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.902 [2024-07-24 19:06:57.322751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.902 qpair failed and we were unable to recover it. 
00:24:19.902 [2024-07-24 19:06:57.322942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.902 [2024-07-24 19:06:57.322967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.902 qpair failed and we were unable to recover it. 00:24:19.902 [2024-07-24 19:06:57.323092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.902 [2024-07-24 19:06:57.323125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.902 qpair failed and we were unable to recover it. 00:24:19.902 [2024-07-24 19:06:57.323264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.902 [2024-07-24 19:06:57.323289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.902 qpair failed and we were unable to recover it. 00:24:19.902 [2024-07-24 19:06:57.323437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.902 [2024-07-24 19:06:57.323462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.902 qpair failed and we were unable to recover it. 00:24:19.902 [2024-07-24 19:06:57.323593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.902 [2024-07-24 19:06:57.323618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.902 qpair failed and we were unable to recover it. 00:24:19.902 [2024-07-24 19:06:57.323769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.902 [2024-07-24 19:06:57.323798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.902 qpair failed and we were unable to recover it. 00:24:19.902 [2024-07-24 19:06:57.323974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.902 [2024-07-24 19:06:57.324002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.902 qpair failed and we were unable to recover it. 00:24:19.902 [2024-07-24 19:06:57.324173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.902 [2024-07-24 19:06:57.324199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.902 qpair failed and we were unable to recover it. 00:24:19.902 [2024-07-24 19:06:57.324345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.902 [2024-07-24 19:06:57.324372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.902 qpair failed and we were unable to recover it. 00:24:19.902 [2024-07-24 19:06:57.324557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.902 [2024-07-24 19:06:57.324582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.902 qpair failed and we were unable to recover it. 
00:24:19.902 [2024-07-24 19:06:57.324738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.902 [2024-07-24 19:06:57.324763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.902 qpair failed and we were unable to recover it. 00:24:19.902 [2024-07-24 19:06:57.324966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.902 [2024-07-24 19:06:57.324995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.902 qpair failed and we were unable to recover it. 00:24:19.902 [2024-07-24 19:06:57.325158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.902 [2024-07-24 19:06:57.325186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.903 qpair failed and we were unable to recover it. 00:24:19.903 [2024-07-24 19:06:57.325355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.903 [2024-07-24 19:06:57.325380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.903 qpair failed and we were unable to recover it. 00:24:19.903 [2024-07-24 19:06:57.325503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.903 [2024-07-24 19:06:57.325547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.903 qpair failed and we were unable to recover it. 00:24:19.903 [2024-07-24 19:06:57.325719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.903 [2024-07-24 19:06:57.325744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.903 qpair failed and we were unable to recover it. 00:24:19.903 [2024-07-24 19:06:57.325897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.903 [2024-07-24 19:06:57.325922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.903 qpair failed and we were unable to recover it. 00:24:19.903 [2024-07-24 19:06:57.326112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.903 [2024-07-24 19:06:57.326137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.903 qpair failed and we were unable to recover it. 00:24:19.903 [2024-07-24 19:06:57.326282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.903 [2024-07-24 19:06:57.326309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.903 qpair failed and we were unable to recover it. 00:24:19.903 [2024-07-24 19:06:57.326462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.903 [2024-07-24 19:06:57.326491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.903 qpair failed and we were unable to recover it. 
00:24:19.903 [2024-07-24 19:06:57.326681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.903 [2024-07-24 19:06:57.326708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:19.903 qpair failed and we were unable to recover it.
00:24:19.903 [... the same three-line failure repeats for every subsequent connect attempt, timestamps advancing from 19:06:57.326880 through 19:06:57.365905; every attempt to tqpair=0x17c6250 at 10.0.0.2, port=4420 fails with errno = 111 and the qpair is not recovered ...]
00:24:19.908 [2024-07-24 19:06:57.365881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.908 [2024-07-24 19:06:57.365905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:19.908 qpair failed and we were unable to recover it.
00:24:19.908 [2024-07-24 19:06:57.366052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.908 [2024-07-24 19:06:57.366079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.908 qpair failed and we were unable to recover it. 00:24:19.908 [2024-07-24 19:06:57.366271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.908 [2024-07-24 19:06:57.366297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.908 qpair failed and we were unable to recover it. 00:24:19.908 [2024-07-24 19:06:57.366453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.908 [2024-07-24 19:06:57.366481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.908 qpair failed and we were unable to recover it. 00:24:19.908 [2024-07-24 19:06:57.366646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.908 [2024-07-24 19:06:57.366670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.908 qpair failed and we were unable to recover it. 00:24:19.908 [2024-07-24 19:06:57.366837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.908 [2024-07-24 19:06:57.366864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.908 qpair failed and we were unable to recover it. 00:24:19.908 [2024-07-24 19:06:57.367048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.908 [2024-07-24 19:06:57.367072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.908 qpair failed and we were unable to recover it. 00:24:19.908 [2024-07-24 19:06:57.367264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.367293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 00:24:19.909 [2024-07-24 19:06:57.367440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.367465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 00:24:19.909 [2024-07-24 19:06:57.367620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.367660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 00:24:19.909 [2024-07-24 19:06:57.367803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.367830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 
00:24:19.909 [2024-07-24 19:06:57.367985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.368011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 00:24:19.909 [2024-07-24 19:06:57.368141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.368168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 00:24:19.909 [2024-07-24 19:06:57.368325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.368349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 00:24:19.909 [2024-07-24 19:06:57.368523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.368550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 00:24:19.909 [2024-07-24 19:06:57.368693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.368721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 00:24:19.909 [2024-07-24 19:06:57.368892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.368916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 00:24:19.909 [2024-07-24 19:06:57.369086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.369121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 00:24:19.909 [2024-07-24 19:06:57.369321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.369345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 00:24:19.909 [2024-07-24 19:06:57.369516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.369544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 00:24:19.909 [2024-07-24 19:06:57.369712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.369741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 
00:24:19.909 [2024-07-24 19:06:57.369867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.369892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 00:24:19.909 [2024-07-24 19:06:57.370073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.370109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 00:24:19.909 [2024-07-24 19:06:57.370271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.370299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 00:24:19.909 [2024-07-24 19:06:57.370474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.370498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 00:24:19.909 [2024-07-24 19:06:57.370647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.370671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 00:24:19.909 [2024-07-24 19:06:57.370823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.370848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 00:24:19.909 [2024-07-24 19:06:57.371064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.371089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 00:24:19.909 [2024-07-24 19:06:57.371255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.371280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 00:24:19.909 [2024-07-24 19:06:57.371472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.371523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 00:24:19.909 [2024-07-24 19:06:57.371701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.371725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 
00:24:19.909 [2024-07-24 19:06:57.371872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.371912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 00:24:19.909 [2024-07-24 19:06:57.372075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.372099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 00:24:19.909 [2024-07-24 19:06:57.372229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.372253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 00:24:19.909 [2024-07-24 19:06:57.372408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.372432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 00:24:19.909 [2024-07-24 19:06:57.372589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.372617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 00:24:19.909 [2024-07-24 19:06:57.372789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.372813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 00:24:19.909 [2024-07-24 19:06:57.372940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.372982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 00:24:19.909 [2024-07-24 19:06:57.373159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.373187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 00:24:19.909 [2024-07-24 19:06:57.373321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.373349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 00:24:19.909 [2024-07-24 19:06:57.373516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.373540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 
00:24:19.909 [2024-07-24 19:06:57.373723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.373774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 00:24:19.909 [2024-07-24 19:06:57.373942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.373971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 00:24:19.909 [2024-07-24 19:06:57.374146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.374174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 00:24:19.909 [2024-07-24 19:06:57.374354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.374379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.909 qpair failed and we were unable to recover it. 00:24:19.909 [2024-07-24 19:06:57.374508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.909 [2024-07-24 19:06:57.374552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.910 [2024-07-24 19:06:57.374697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.374726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.910 [2024-07-24 19:06:57.374874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.374902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.910 [2024-07-24 19:06:57.375084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.375115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.910 [2024-07-24 19:06:57.375297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.375324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.910 [2024-07-24 19:06:57.375460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.375487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 
00:24:19.910 [2024-07-24 19:06:57.375668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.375693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.910 [2024-07-24 19:06:57.375820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.375844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.910 [2024-07-24 19:06:57.375993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.376036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.910 [2024-07-24 19:06:57.376225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.376253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.910 [2024-07-24 19:06:57.376382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.376409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.910 [2024-07-24 19:06:57.376585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.376609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.910 [2024-07-24 19:06:57.376764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.376806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.910 [2024-07-24 19:06:57.376955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.376983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.910 [2024-07-24 19:06:57.377148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.377177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.910 [2024-07-24 19:06:57.377344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.377369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 
00:24:19.910 [2024-07-24 19:06:57.377522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.377547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.910 [2024-07-24 19:06:57.377673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.377699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.910 [2024-07-24 19:06:57.377888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.377915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.910 [2024-07-24 19:06:57.378063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.378088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.910 [2024-07-24 19:06:57.378248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.378273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.910 [2024-07-24 19:06:57.378454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.378481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.910 [2024-07-24 19:06:57.378650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.378677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.910 [2024-07-24 19:06:57.378826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.378851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.910 [2024-07-24 19:06:57.379045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.379072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.910 [2024-07-24 19:06:57.379242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.379269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 
00:24:19.910 [2024-07-24 19:06:57.379397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.379440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.910 [2024-07-24 19:06:57.379613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.379638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.910 [2024-07-24 19:06:57.379764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.379791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.910 [2024-07-24 19:06:57.379944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.379973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.910 [2024-07-24 19:06:57.380124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.380153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.910 [2024-07-24 19:06:57.380300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.380325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.910 [2024-07-24 19:06:57.380446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.380472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.910 [2024-07-24 19:06:57.380596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.380621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.910 [2024-07-24 19:06:57.380817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.380844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.910 [2024-07-24 19:06:57.381008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.381032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 
00:24:19.910 [2024-07-24 19:06:57.381187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.381212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.910 [2024-07-24 19:06:57.381360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.381385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.910 [2024-07-24 19:06:57.381509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.910 [2024-07-24 19:06:57.381534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.910 qpair failed and we were unable to recover it. 00:24:19.911 [2024-07-24 19:06:57.381688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.911 [2024-07-24 19:06:57.381712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.911 qpair failed and we were unable to recover it. 00:24:19.911 [2024-07-24 19:06:57.381875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.911 [2024-07-24 19:06:57.381902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.911 qpair failed and we were unable to recover it. 00:24:19.911 [2024-07-24 19:06:57.382059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.911 [2024-07-24 19:06:57.382084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.911 qpair failed and we were unable to recover it. 00:24:19.911 [2024-07-24 19:06:57.382222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.911 [2024-07-24 19:06:57.382247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.911 qpair failed and we were unable to recover it. 00:24:19.911 [2024-07-24 19:06:57.382464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.911 [2024-07-24 19:06:57.382493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.911 qpair failed and we were unable to recover it. 00:24:19.911 [2024-07-24 19:06:57.382667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.911 [2024-07-24 19:06:57.382695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.911 qpair failed and we were unable to recover it. 00:24:19.911 [2024-07-24 19:06:57.382897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.911 [2024-07-24 19:06:57.382921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.911 qpair failed and we were unable to recover it. 
00:24:19.911 [2024-07-24 19:06:57.383051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.911 [2024-07-24 19:06:57.383076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.911 qpair failed and we were unable to recover it. 00:24:19.911 [2024-07-24 19:06:57.383209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.911 [2024-07-24 19:06:57.383235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.911 qpair failed and we were unable to recover it. 00:24:19.911 [2024-07-24 19:06:57.383388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.911 [2024-07-24 19:06:57.383412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.911 qpair failed and we were unable to recover it. 00:24:19.911 [2024-07-24 19:06:57.383530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.911 [2024-07-24 19:06:57.383554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.911 qpair failed and we were unable to recover it. 00:24:19.911 [2024-07-24 19:06:57.383763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.911 [2024-07-24 19:06:57.383791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.911 qpair failed and we were unable to recover it. 00:24:19.911 [2024-07-24 19:06:57.383966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.911 [2024-07-24 19:06:57.383990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.911 qpair failed and we were unable to recover it. 00:24:19.911 [2024-07-24 19:06:57.384127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.911 [2024-07-24 19:06:57.384153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.911 qpair failed and we were unable to recover it. 00:24:19.911 [2024-07-24 19:06:57.384329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.911 [2024-07-24 19:06:57.384354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.911 qpair failed and we were unable to recover it. 00:24:19.911 [2024-07-24 19:06:57.384504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.911 [2024-07-24 19:06:57.384532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.911 qpair failed and we were unable to recover it. 00:24:19.911 [2024-07-24 19:06:57.384704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.911 [2024-07-24 19:06:57.384729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.911 qpair failed and we were unable to recover it. 
00:24:19.911 [2024-07-24 19:06:57.384877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.911 [2024-07-24 19:06:57.384918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.911 qpair failed and we were unable to recover it. 00:24:19.911 [2024-07-24 19:06:57.385096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.911 [2024-07-24 19:06:57.385131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.911 qpair failed and we were unable to recover it. 00:24:19.911 [2024-07-24 19:06:57.385307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.911 [2024-07-24 19:06:57.385350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.911 qpair failed and we were unable to recover it. 00:24:19.911 [2024-07-24 19:06:57.385490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.911 [2024-07-24 19:06:57.385514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.911 qpair failed and we were unable to recover it. 00:24:19.911 [2024-07-24 19:06:57.385665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.911 [2024-07-24 19:06:57.385708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.911 qpair failed and we were unable to recover it. 00:24:19.911 [2024-07-24 19:06:57.385844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.911 [2024-07-24 19:06:57.385871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.911 qpair failed and we were unable to recover it. 00:24:19.911 [2024-07-24 19:06:57.386007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.911 [2024-07-24 19:06:57.386035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.911 qpair failed and we were unable to recover it. 00:24:19.911 [2024-07-24 19:06:57.386180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.911 [2024-07-24 19:06:57.386206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.911 qpair failed and we were unable to recover it. 00:24:19.911 [2024-07-24 19:06:57.386356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.911 [2024-07-24 19:06:57.386381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.911 qpair failed and we were unable to recover it. 00:24:19.911 [2024-07-24 19:06:57.386559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.911 [2024-07-24 19:06:57.386587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.911 qpair failed and we were unable to recover it. 
00:24:19.911 [2024-07-24 19:06:57.386724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.911 [2024-07-24 19:06:57.386752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.911 qpair failed and we were unable to recover it. 00:24:19.911 [2024-07-24 19:06:57.386937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.911 [2024-07-24 19:06:57.386962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.911 qpair failed and we were unable to recover it. 00:24:19.911 [2024-07-24 19:06:57.387188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.911 [2024-07-24 19:06:57.387216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.911 qpair failed and we were unable to recover it. 00:24:19.911 [2024-07-24 19:06:57.387392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.911 [2024-07-24 19:06:57.387417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.911 qpair failed and we were unable to recover it. 00:24:19.911 [2024-07-24 19:06:57.387549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.911 [2024-07-24 19:06:57.387577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.911 qpair failed and we were unable to recover it. 00:24:19.911 [2024-07-24 19:06:57.387740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.911 [2024-07-24 19:06:57.387765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.912 qpair failed and we were unable to recover it. 00:24:19.912 [2024-07-24 19:06:57.387915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.912 [2024-07-24 19:06:57.387941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.912 qpair failed and we were unable to recover it. 00:24:19.912 [2024-07-24 19:06:57.388072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.912 [2024-07-24 19:06:57.388097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.912 qpair failed and we were unable to recover it. 00:24:19.912 [2024-07-24 19:06:57.388288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.912 [2024-07-24 19:06:57.388313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.912 qpair failed and we were unable to recover it. 00:24:19.912 [2024-07-24 19:06:57.388508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.912 [2024-07-24 19:06:57.388532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.912 qpair failed and we were unable to recover it. 
00:24:19.912 [2024-07-24 19:06:57.388663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.912 [2024-07-24 19:06:57.388688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.912 qpair failed and we were unable to recover it. 00:24:19.912 [2024-07-24 19:06:57.388809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.912 [2024-07-24 19:06:57.388834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.912 qpair failed and we were unable to recover it. 00:24:19.912 [2024-07-24 19:06:57.389039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.912 [2024-07-24 19:06:57.389063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.912 qpair failed and we were unable to recover it. 00:24:19.912 [2024-07-24 19:06:57.389208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.912 [2024-07-24 19:06:57.389233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.912 qpair failed and we were unable to recover it. 00:24:19.912 [2024-07-24 19:06:57.389356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.912 [2024-07-24 19:06:57.389398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.912 qpair failed and we were unable to recover it. 00:24:19.912 [2024-07-24 19:06:57.389562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.912 [2024-07-24 19:06:57.389589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.912 qpair failed and we were unable to recover it. 00:24:19.912 [2024-07-24 19:06:57.389733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.912 [2024-07-24 19:06:57.389759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.912 qpair failed and we were unable to recover it. 00:24:19.912 [2024-07-24 19:06:57.389940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.912 [2024-07-24 19:06:57.389964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.912 qpair failed and we were unable to recover it. 00:24:19.912 [2024-07-24 19:06:57.390098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.912 [2024-07-24 19:06:57.390137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.912 qpair failed and we were unable to recover it. 00:24:19.912 [2024-07-24 19:06:57.390329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.912 [2024-07-24 19:06:57.390357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.912 qpair failed and we were unable to recover it. 
00:24:19.912 [2024-07-24 19:06:57.390533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.912 [2024-07-24 19:06:57.390557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:19.912 qpair failed and we were unable to recover it.
00:24:19.918 [the same three-message triplet repeats continuously from 19:06:57.390533 through 19:06:57.429025: every connect() attempt fails with errno = 111, nvme_tcp_qpair_connect_sock reports the same sock connection error for tqpair=0x17c6250, addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it."]
00:24:19.918 [2024-07-24 19:06:57.429177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.429205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 00:24:19.918 [2024-07-24 19:06:57.429371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.429398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 00:24:19.918 [2024-07-24 19:06:57.429555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.429579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 00:24:19.918 [2024-07-24 19:06:57.429731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.429756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 00:24:19.918 [2024-07-24 19:06:57.429927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.429953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 00:24:19.918 [2024-07-24 19:06:57.430125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.430153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 00:24:19.918 [2024-07-24 19:06:57.430312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.430339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 00:24:19.918 [2024-07-24 19:06:57.430487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.430511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 00:24:19.918 [2024-07-24 19:06:57.430641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.430665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 00:24:19.918 [2024-07-24 19:06:57.430786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.430809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 
00:24:19.918 [2024-07-24 19:06:57.430942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.430967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 00:24:19.918 [2024-07-24 19:06:57.431119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.431144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 00:24:19.918 [2024-07-24 19:06:57.431292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.431317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 00:24:19.918 [2024-07-24 19:06:57.431467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.431506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 00:24:19.918 [2024-07-24 19:06:57.431704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.431730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 00:24:19.918 [2024-07-24 19:06:57.431909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.431934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 00:24:19.918 [2024-07-24 19:06:57.432110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.432138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 00:24:19.918 [2024-07-24 19:06:57.432297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.432325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 00:24:19.918 [2024-07-24 19:06:57.432471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.432502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 00:24:19.918 [2024-07-24 19:06:57.432658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.432682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 
00:24:19.918 [2024-07-24 19:06:57.432799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.432823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 00:24:19.918 [2024-07-24 19:06:57.432953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.432978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 00:24:19.918 [2024-07-24 19:06:57.433169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.433195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 00:24:19.918 [2024-07-24 19:06:57.433321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.433345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 00:24:19.918 [2024-07-24 19:06:57.433475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.433499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 00:24:19.918 [2024-07-24 19:06:57.433692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.433717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 00:24:19.918 [2024-07-24 19:06:57.433872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.433913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 00:24:19.918 [2024-07-24 19:06:57.434065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.434089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 00:24:19.918 [2024-07-24 19:06:57.434250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.434290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 00:24:19.918 [2024-07-24 19:06:57.434421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.434449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 
00:24:19.918 [2024-07-24 19:06:57.434596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.434624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 00:24:19.918 [2024-07-24 19:06:57.434800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.434824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 00:24:19.918 [2024-07-24 19:06:57.434980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.435026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 00:24:19.918 [2024-07-24 19:06:57.435182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.435211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 00:24:19.918 [2024-07-24 19:06:57.435375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.435402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 00:24:19.918 [2024-07-24 19:06:57.435577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.435601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.918 qpair failed and we were unable to recover it. 00:24:19.918 [2024-07-24 19:06:57.435724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.918 [2024-07-24 19:06:57.435748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.435946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.435970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.436125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.436149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.436278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.436303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 
00:24:19.919 [2024-07-24 19:06:57.436432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.436456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.436609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.436633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.436808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.436849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.436991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.437016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.437173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.437216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.437354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.437382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.437544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.437569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.437723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.437748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.437869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.437892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.438070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.438097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 
00:24:19.919 [2024-07-24 19:06:57.438270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.438298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.438500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.438524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.438695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.438722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.438887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.438915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.439063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.439088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.439235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.439259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.439411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.439451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.439626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.439651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.439793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.439818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.439953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.439977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 
00:24:19.919 [2024-07-24 19:06:57.440132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.440175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.440365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.440392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.440562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.440590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.440756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.440780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.440945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.440973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.441121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.441149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.441294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.441320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.441466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.441491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.441603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.441627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.441834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.441860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 
00:24:19.919 [2024-07-24 19:06:57.442004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.442031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.442282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.442308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.442508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.442536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.442682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.442710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.442839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.442865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.443005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.443030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.443187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.919 [2024-07-24 19:06:57.443228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.919 qpair failed and we were unable to recover it. 00:24:19.919 [2024-07-24 19:06:57.443370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.920 [2024-07-24 19:06:57.443396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.920 qpair failed and we were unable to recover it. 00:24:19.920 [2024-07-24 19:06:57.443569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.920 [2024-07-24 19:06:57.443594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.920 qpair failed and we were unable to recover it. 00:24:19.920 [2024-07-24 19:06:57.443750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.920 [2024-07-24 19:06:57.443775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.920 qpair failed and we were unable to recover it. 
00:24:19.920 [2024-07-24 19:06:57.443915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.920 [2024-07-24 19:06:57.443943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.920 qpair failed and we were unable to recover it. 00:24:19.920 [2024-07-24 19:06:57.444116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.920 [2024-07-24 19:06:57.444143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.920 qpair failed and we were unable to recover it. 00:24:19.920 [2024-07-24 19:06:57.444283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.920 [2024-07-24 19:06:57.444309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.920 qpair failed and we were unable to recover it. 00:24:19.920 [2024-07-24 19:06:57.444457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.920 [2024-07-24 19:06:57.444483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.920 qpair failed and we were unable to recover it. 00:24:19.920 [2024-07-24 19:06:57.444681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.920 [2024-07-24 19:06:57.444708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.920 qpair failed and we were unable to recover it. 00:24:19.920 [2024-07-24 19:06:57.444850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.920 [2024-07-24 19:06:57.444876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.920 qpair failed and we were unable to recover it. 00:24:19.920 [2024-07-24 19:06:57.445011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.920 [2024-07-24 19:06:57.445043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.920 qpair failed and we were unable to recover it. 00:24:19.920 [2024-07-24 19:06:57.445220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.920 [2024-07-24 19:06:57.445246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.920 qpair failed and we were unable to recover it. 00:24:19.920 [2024-07-24 19:06:57.445378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.920 [2024-07-24 19:06:57.445403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.920 qpair failed and we were unable to recover it. 00:24:19.920 [2024-07-24 19:06:57.445580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.920 [2024-07-24 19:06:57.445624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.920 qpair failed and we were unable to recover it. 
00:24:19.920 [2024-07-24 19:06:57.445764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.920 [2024-07-24 19:06:57.445792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.920 qpair failed and we were unable to recover it. 00:24:19.920 [2024-07-24 19:06:57.445938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.920 [2024-07-24 19:06:57.445962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.920 qpair failed and we were unable to recover it. 00:24:19.920 [2024-07-24 19:06:57.446112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.920 [2024-07-24 19:06:57.446137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.920 qpair failed and we were unable to recover it. 00:24:19.920 [2024-07-24 19:06:57.446274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.920 [2024-07-24 19:06:57.446301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.920 qpair failed and we were unable to recover it. 00:24:19.920 [2024-07-24 19:06:57.446500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.920 [2024-07-24 19:06:57.446524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.920 qpair failed and we were unable to recover it. 00:24:19.920 [2024-07-24 19:06:57.446673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.920 [2024-07-24 19:06:57.446697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.920 qpair failed and we were unable to recover it. 00:24:19.920 [2024-07-24 19:06:57.446820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.920 [2024-07-24 19:06:57.446864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.920 qpair failed and we were unable to recover it. 00:24:19.920 [2024-07-24 19:06:57.447009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.920 [2024-07-24 19:06:57.447036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.920 qpair failed and we were unable to recover it. 00:24:19.920 [2024-07-24 19:06:57.447176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.920 [2024-07-24 19:06:57.447204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:19.920 qpair failed and we were unable to recover it. 00:24:20.202 [2024-07-24 19:06:57.447375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.202 [2024-07-24 19:06:57.447399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.202 qpair failed and we were unable to recover it. 
00:24:20.202 [2024-07-24 19:06:57.447556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.202 [2024-07-24 19:06:57.447581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.202 qpair failed and we were unable to recover it. 00:24:20.202 [2024-07-24 19:06:57.447706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.202 [2024-07-24 19:06:57.447731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.202 qpair failed and we were unable to recover it. 00:24:20.202 [2024-07-24 19:06:57.447910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.202 [2024-07-24 19:06:57.447939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.202 qpair failed and we were unable to recover it. 00:24:20.202 [2024-07-24 19:06:57.448109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.202 [2024-07-24 19:06:57.448134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.202 qpair failed and we were unable to recover it. 00:24:20.202 [2024-07-24 19:06:57.448257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.202 [2024-07-24 19:06:57.448282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.202 qpair failed and we were unable to recover it. 00:24:20.203 [2024-07-24 19:06:57.448406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.448430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 00:24:20.203 [2024-07-24 19:06:57.448624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.448649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 00:24:20.203 [2024-07-24 19:06:57.448804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.448828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 00:24:20.203 [2024-07-24 19:06:57.448999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.449026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 00:24:20.203 [2024-07-24 19:06:57.449174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.449202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 
00:24:20.203 [2024-07-24 19:06:57.449367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.449392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 00:24:20.203 [2024-07-24 19:06:57.449518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.449542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 00:24:20.203 [2024-07-24 19:06:57.449691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.449733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 00:24:20.203 [2024-07-24 19:06:57.449885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.449914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 00:24:20.203 [2024-07-24 19:06:57.450043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.450068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 00:24:20.203 [2024-07-24 19:06:57.450201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.450225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 00:24:20.203 [2024-07-24 19:06:57.450372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.450397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 00:24:20.203 [2024-07-24 19:06:57.450521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.450544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 00:24:20.203 [2024-07-24 19:06:57.450716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.450743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 00:24:20.203 [2024-07-24 19:06:57.450912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.450937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 
00:24:20.203 [2024-07-24 19:06:57.451131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.451160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 00:24:20.203 [2024-07-24 19:06:57.451337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.451362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 00:24:20.203 [2024-07-24 19:06:57.451489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.451514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 00:24:20.203 [2024-07-24 19:06:57.451653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.451677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 00:24:20.203 [2024-07-24 19:06:57.451802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.451842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 00:24:20.203 [2024-07-24 19:06:57.452024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.452049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 00:24:20.203 [2024-07-24 19:06:57.452170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.452195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 00:24:20.203 [2024-07-24 19:06:57.452342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.452367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 00:24:20.203 [2024-07-24 19:06:57.452524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.452566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 00:24:20.203 [2024-07-24 19:06:57.452745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.452770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 
00:24:20.203 [2024-07-24 19:06:57.452912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.452937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 00:24:20.203 [2024-07-24 19:06:57.453088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.453121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 00:24:20.203 [2024-07-24 19:06:57.453270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.453297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 00:24:20.203 [2024-07-24 19:06:57.453432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.453457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 00:24:20.203 [2024-07-24 19:06:57.453635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.453659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 00:24:20.203 [2024-07-24 19:06:57.453812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.453837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 00:24:20.203 [2024-07-24 19:06:57.453958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.453983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 00:24:20.203 [2024-07-24 19:06:57.454115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.454140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 00:24:20.203 [2024-07-24 19:06:57.454306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.454332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 00:24:20.203 [2024-07-24 19:06:57.454499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.454524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 
00:24:20.203 [2024-07-24 19:06:57.454702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.454728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 00:24:20.203 [2024-07-24 19:06:57.454895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.203 [2024-07-24 19:06:57.454922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.203 qpair failed and we were unable to recover it. 00:24:20.203 [2024-07-24 19:06:57.455056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.204 [2024-07-24 19:06:57.455081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.204 qpair failed and we were unable to recover it. 00:24:20.204 [2024-07-24 19:06:57.455238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.204 [2024-07-24 19:06:57.455262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.204 qpair failed and we were unable to recover it. 00:24:20.204 [2024-07-24 19:06:57.455411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.204 [2024-07-24 19:06:57.455455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.204 qpair failed and we were unable to recover it. 00:24:20.204 [2024-07-24 19:06:57.455584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.204 [2024-07-24 19:06:57.455609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.204 qpair failed and we were unable to recover it. 00:24:20.204 [2024-07-24 19:06:57.455791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.204 [2024-07-24 19:06:57.455816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.204 qpair failed and we were unable to recover it. 00:24:20.204 [2024-07-24 19:06:57.455963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.204 [2024-07-24 19:06:57.455987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.204 qpair failed and we were unable to recover it. 00:24:20.204 [2024-07-24 19:06:57.456136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.204 [2024-07-24 19:06:57.456161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.204 qpair failed and we were unable to recover it. 00:24:20.204 [2024-07-24 19:06:57.456324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.204 [2024-07-24 19:06:57.456347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.204 qpair failed and we were unable to recover it. 
00:24:20.204 [2024-07-24 19:06:57.456525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.204 [2024-07-24 19:06:57.456550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.204 qpair failed and we were unable to recover it.
00:24:20.204 [2024-07-24 19:06:57.456708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.204 [2024-07-24 19:06:57.456732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.204 qpair failed and we were unable to recover it.
00:24:20.204 [2024-07-24 19:06:57.456877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.204 [2024-07-24 19:06:57.456904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.204 qpair failed and we were unable to recover it.
00:24:20.204 [2024-07-24 19:06:57.457073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.204 [2024-07-24 19:06:57.457098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.204 qpair failed and we were unable to recover it.
00:24:20.204 [2024-07-24 19:06:57.457268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.204 [2024-07-24 19:06:57.457309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.204 qpair failed and we were unable to recover it.
00:24:20.204 [2024-07-24 19:06:57.457461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.204 [2024-07-24 19:06:57.457486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.204 qpair failed and we were unable to recover it.
00:24:20.204 [2024-07-24 19:06:57.457608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.204 [2024-07-24 19:06:57.457648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.204 qpair failed and we were unable to recover it.
00:24:20.204 [2024-07-24 19:06:57.457835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.204 [2024-07-24 19:06:57.457861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.204 qpair failed and we were unable to recover it.
00:24:20.204 [2024-07-24 19:06:57.457993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.204 [2024-07-24 19:06:57.458019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.204 qpair failed and we were unable to recover it.
00:24:20.204 [2024-07-24 19:06:57.458220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.204 [2024-07-24 19:06:57.458245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.204 qpair failed and we were unable to recover it.
00:24:20.204 [2024-07-24 19:06:57.458398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.204 [2024-07-24 19:06:57.458422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.204 qpair failed and we were unable to recover it.
00:24:20.204 [2024-07-24 19:06:57.458569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.204 [2024-07-24 19:06:57.458593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.204 qpair failed and we were unable to recover it.
00:24:20.204 [2024-07-24 19:06:57.458723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.204 [2024-07-24 19:06:57.458748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.204 qpair failed and we were unable to recover it.
00:24:20.204 [2024-07-24 19:06:57.458899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.204 [2024-07-24 19:06:57.458923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.204 qpair failed and we were unable to recover it.
00:24:20.204 [2024-07-24 19:06:57.459077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.204 [2024-07-24 19:06:57.459123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.204 qpair failed and we were unable to recover it.
00:24:20.204 [2024-07-24 19:06:57.459289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.204 [2024-07-24 19:06:57.459315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.204 qpair failed and we were unable to recover it.
00:24:20.204 [2024-07-24 19:06:57.459451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.204 [2024-07-24 19:06:57.459477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.204 qpair failed and we were unable to recover it.
00:24:20.204 [2024-07-24 19:06:57.459641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.204 [2024-07-24 19:06:57.459665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.204 qpair failed and we were unable to recover it.
00:24:20.204 [2024-07-24 19:06:57.459824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.204 [2024-07-24 19:06:57.459848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.204 qpair failed and we were unable to recover it.
00:24:20.204 [2024-07-24 19:06:57.460042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.204 [2024-07-24 19:06:57.460069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.204 qpair failed and we were unable to recover it.
00:24:20.204 [2024-07-24 19:06:57.460264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.204 [2024-07-24 19:06:57.460304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.204 qpair failed and we were unable to recover it.
00:24:20.204 [2024-07-24 19:06:57.460440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.204 [2024-07-24 19:06:57.460468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.204 qpair failed and we were unable to recover it.
00:24:20.204 [2024-07-24 19:06:57.460626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.204 [2024-07-24 19:06:57.460653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.204 qpair failed and we were unable to recover it.
00:24:20.204 [2024-07-24 19:06:57.460823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.204 [2024-07-24 19:06:57.460852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.204 qpair failed and we were unable to recover it.
00:24:20.204 [2024-07-24 19:06:57.460996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.204 [2024-07-24 19:06:57.461027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.204 qpair failed and we were unable to recover it.
00:24:20.204 [2024-07-24 19:06:57.461207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.204 [2024-07-24 19:06:57.461235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.204 qpair failed and we were unable to recover it.
00:24:20.204 [2024-07-24 19:06:57.461371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.204 [2024-07-24 19:06:57.461417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.204 qpair failed and we were unable to recover it.
00:24:20.204 [2024-07-24 19:06:57.461657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.204 [2024-07-24 19:06:57.461684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.204 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.461846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.461874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.462043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.462069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.462245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.462273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.462420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.462449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.462588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.462616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.462781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.462807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.462970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.462998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.463169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.463197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.463384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.463412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.463597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.463623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.463756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.463784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.463942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.463970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.464146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.464172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.464323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.464349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.464474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.464500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.464627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.464655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.464837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.464869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.465018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.465043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.465197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.465222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.465369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.465394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.465522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.465562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.465731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.465755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.465918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.465943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.466077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.466112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.466279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.466304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.466451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.466476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.466624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.466664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.466845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.466871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.467023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.467051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.467212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.467238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.467372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.467398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.467521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.467548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.467699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.467725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.467906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.467932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.468118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.468145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.205 [2024-07-24 19:06:57.468327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.205 [2024-07-24 19:06:57.468354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.205 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.468511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.468538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.468695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.468721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.468888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.468915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.469076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.469108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.469238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.469265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.469407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.469433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.469601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.469628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.469830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.469861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.470013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.470039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.470192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.470217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.470346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.470373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.470498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.470524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.470673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.470698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.470847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.470873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.471031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.471057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.471214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.471241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.471422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.471448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.471597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.471623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.471771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.471797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.471919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.471945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.472097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.472130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.472294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.472320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.472440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.472465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.472625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.472650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.472807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.472832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.472978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.473002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.473155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.473181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.473323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.473348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.473499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.473523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.473660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.473685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.473862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.473886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.474008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.474032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.474168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.474194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.474339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.474365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.474518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.474545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.474675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.474700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.474820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.206 [2024-07-24 19:06:57.474845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.206 qpair failed and we were unable to recover it.
00:24:20.206 [2024-07-24 19:06:57.475022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.207 [2024-07-24 19:06:57.475046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.207 qpair failed and we were unable to recover it.
00:24:20.207 [2024-07-24 19:06:57.475183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.207 [2024-07-24 19:06:57.475209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.207 qpair failed and we were unable to recover it.
00:24:20.207 [2024-07-24 19:06:57.475362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.207 [2024-07-24 19:06:57.475387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.207 qpair failed and we were unable to recover it.
00:24:20.207 [2024-07-24 19:06:57.475571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.207 [2024-07-24 19:06:57.475595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.207 qpair failed and we were unable to recover it.
00:24:20.207 [2024-07-24 19:06:57.475744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.207 [2024-07-24 19:06:57.475768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.207 qpair failed and we were unable to recover it.
00:24:20.207 [2024-07-24 19:06:57.475922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.207 [2024-07-24 19:06:57.475947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.207 qpair failed and we were unable to recover it.
00:24:20.207 [2024-07-24 19:06:57.476066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.207 [2024-07-24 19:06:57.476090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.207 qpair failed and we were unable to recover it.
00:24:20.207 [2024-07-24 19:06:57.476253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.207 [2024-07-24 19:06:57.476277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.207 qpair failed and we were unable to recover it.
00:24:20.207 [2024-07-24 19:06:57.476401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.207 [2024-07-24 19:06:57.476427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.207 qpair failed and we were unable to recover it.
00:24:20.207 [2024-07-24 19:06:57.476546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.207 [2024-07-24 19:06:57.476569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.207 qpair failed and we were unable to recover it.
00:24:20.207 [2024-07-24 19:06:57.476693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.207 [2024-07-24 19:06:57.476718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.207 qpair failed and we were unable to recover it.
00:24:20.207 [2024-07-24 19:06:57.476896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.207 [2024-07-24 19:06:57.476920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.207 qpair failed and we were unable to recover it.
00:24:20.207 [2024-07-24 19:06:57.477076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.207 [2024-07-24 19:06:57.477100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.207 qpair failed and we were unable to recover it.
00:24:20.207 [2024-07-24 19:06:57.477234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.207 [2024-07-24 19:06:57.477259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.207 qpair failed and we were unable to recover it.
00:24:20.207 [2024-07-24 19:06:57.477390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.207 [2024-07-24 19:06:57.477415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.207 qpair failed and we were unable to recover it.
00:24:20.207 [2024-07-24 19:06:57.477600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.207 [2024-07-24 19:06:57.477624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.207 qpair failed and we were unable to recover it.
00:24:20.207 [2024-07-24 19:06:57.477742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.207 [2024-07-24 19:06:57.477767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.207 qpair failed and we were unable to recover it.
00:24:20.207 [2024-07-24 19:06:57.477941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.207 [2024-07-24 19:06:57.477966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.207 qpair failed and we were unable to recover it.
00:24:20.207 [2024-07-24 19:06:57.478097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.207 [2024-07-24 19:06:57.478129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.207 qpair failed and we were unable to recover it.
00:24:20.207 [2024-07-24 19:06:57.478284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.207 [2024-07-24 19:06:57.478310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.207 qpair failed and we were unable to recover it.
00:24:20.207 [2024-07-24 19:06:57.478427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.207 [2024-07-24 19:06:57.478452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.207 qpair failed and we were unable to recover it.
00:24:20.207 [2024-07-24 19:06:57.478604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.207 [2024-07-24 19:06:57.478629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.207 qpair failed and we were unable to recover it.
00:24:20.207 [2024-07-24 19:06:57.478782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.207 [2024-07-24 19:06:57.478807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.207 qpair failed and we were unable to recover it.
00:24:20.207 [2024-07-24 19:06:57.478958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.207 [2024-07-24 19:06:57.478983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.207 qpair failed and we were unable to recover it.
00:24:20.207 [2024-07-24 19:06:57.479112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.207 [2024-07-24 19:06:57.479142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.207 qpair failed and we were unable to recover it.
00:24:20.207 [2024-07-24 19:06:57.479271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.207 [2024-07-24 19:06:57.479296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.207 qpair failed and we were unable to recover it.
00:24:20.207 [2024-07-24 19:06:57.479450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.207 [2024-07-24 19:06:57.479475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.207 qpair failed and we were unable to recover it.
00:24:20.207 [2024-07-24 19:06:57.479598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.207 [2024-07-24 19:06:57.479623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.207 qpair failed and we were unable to recover it.
00:24:20.207 [2024-07-24 19:06:57.479775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.207 [2024-07-24 19:06:57.479800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.207 qpair failed and we were unable to recover it.
00:24:20.207 [2024-07-24 19:06:57.479948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.479976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.480189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.480215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.480364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.480389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.480514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.480538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.480661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.480687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.480875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.480899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.481039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.481063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.481214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.481239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.481365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.481390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.481546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.481571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.481690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.481716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.481868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.481893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.482024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.482049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.482175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.482200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.482399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.482424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.482555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.482580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.482699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.482724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.482881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.482905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.483072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.483099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.483281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.483306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.483430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.483455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.483577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.483602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.483756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.483784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.483937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.483961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.484123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.484149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.484269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.484294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.484422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.484448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.484583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.484607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.484762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.484787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.484906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.484930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.485076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.485123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.485285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.485312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.485477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.485504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.485695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.485721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.485894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.485919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.486073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.486099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.208 [2024-07-24 19:06:57.486237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.208 [2024-07-24 19:06:57.486263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.208 qpair failed and we were unable to recover it.
00:24:20.209 [2024-07-24 19:06:57.486407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.209 [2024-07-24 19:06:57.486433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.209 qpair failed and we were unable to recover it.
00:24:20.209 [2024-07-24 19:06:57.486584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.209 [2024-07-24 19:06:57.486628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.209 qpair failed and we were unable to recover it.
00:24:20.209 [2024-07-24 19:06:57.486800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.209 [2024-07-24 19:06:57.486825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.209 qpair failed and we were unable to recover it.
00:24:20.209 [2024-07-24 19:06:57.486983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.209 [2024-07-24 19:06:57.487009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.209 qpair failed and we were unable to recover it. 00:24:20.209 [2024-07-24 19:06:57.487168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.209 [2024-07-24 19:06:57.487194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.209 qpair failed and we were unable to recover it. 00:24:20.209 [2024-07-24 19:06:57.487341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.209 [2024-07-24 19:06:57.487367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.209 qpair failed and we were unable to recover it. 00:24:20.209 [2024-07-24 19:06:57.487524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.209 [2024-07-24 19:06:57.487550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.209 qpair failed and we were unable to recover it. 00:24:20.209 [2024-07-24 19:06:57.487723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.209 [2024-07-24 19:06:57.487753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.209 qpair failed and we were unable to recover it. 00:24:20.209 [2024-07-24 19:06:57.487955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.209 [2024-07-24 19:06:57.487981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.209 qpair failed and we were unable to recover it. 00:24:20.209 [2024-07-24 19:06:57.488177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.209 [2024-07-24 19:06:57.488207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.209 qpair failed and we were unable to recover it. 00:24:20.209 [2024-07-24 19:06:57.488382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.209 [2024-07-24 19:06:57.488410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.209 qpair failed and we were unable to recover it. 00:24:20.209 [2024-07-24 19:06:57.488586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.209 [2024-07-24 19:06:57.488612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.209 qpair failed and we were unable to recover it. 00:24:20.209 [2024-07-24 19:06:57.488733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.209 [2024-07-24 19:06:57.488765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.209 qpair failed and we were unable to recover it. 
00:24:20.209 [2024-07-24 19:06:57.488945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.209 [2024-07-24 19:06:57.488974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.209 qpair failed and we were unable to recover it. 00:24:20.209 [2024-07-24 19:06:57.489147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.209 [2024-07-24 19:06:57.489174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.209 qpair failed and we were unable to recover it. 00:24:20.209 [2024-07-24 19:06:57.489302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.209 [2024-07-24 19:06:57.489329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.209 qpair failed and we were unable to recover it. 00:24:20.209 [2024-07-24 19:06:57.489479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.209 [2024-07-24 19:06:57.489521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.209 qpair failed and we were unable to recover it. 00:24:20.209 [2024-07-24 19:06:57.489720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.209 [2024-07-24 19:06:57.489746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.209 qpair failed and we were unable to recover it. 00:24:20.209 [2024-07-24 19:06:57.489865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.209 [2024-07-24 19:06:57.489891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.209 qpair failed and we were unable to recover it. 00:24:20.209 [2024-07-24 19:06:57.490042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.209 [2024-07-24 19:06:57.490068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.209 qpair failed and we were unable to recover it. 00:24:20.209 [2024-07-24 19:06:57.490239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.209 [2024-07-24 19:06:57.490268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.209 qpair failed and we were unable to recover it. 00:24:20.209 [2024-07-24 19:06:57.490447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.209 [2024-07-24 19:06:57.490473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.209 qpair failed and we were unable to recover it. 00:24:20.209 [2024-07-24 19:06:57.490608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.209 [2024-07-24 19:06:57.490633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.209 qpair failed and we were unable to recover it. 
00:24:20.209 [2024-07-24 19:06:57.490763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.209 [2024-07-24 19:06:57.490789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.209 qpair failed and we were unable to recover it. 00:24:20.209 [2024-07-24 19:06:57.490998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.209 [2024-07-24 19:06:57.491022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.209 qpair failed and we were unable to recover it. 00:24:20.209 [2024-07-24 19:06:57.491144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.209 [2024-07-24 19:06:57.491170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.209 qpair failed and we were unable to recover it. 00:24:20.209 [2024-07-24 19:06:57.491391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.209 [2024-07-24 19:06:57.491416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.209 qpair failed and we were unable to recover it. 00:24:20.209 [2024-07-24 19:06:57.491549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.209 [2024-07-24 19:06:57.491573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.209 qpair failed and we were unable to recover it. 00:24:20.209 [2024-07-24 19:06:57.491730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.209 [2024-07-24 19:06:57.491754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.209 qpair failed and we were unable to recover it. 00:24:20.209 [2024-07-24 19:06:57.491908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.209 [2024-07-24 19:06:57.491932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.209 qpair failed and we were unable to recover it. 00:24:20.209 [2024-07-24 19:06:57.492084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.209 [2024-07-24 19:06:57.492116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.209 qpair failed and we were unable to recover it. 00:24:20.209 [2024-07-24 19:06:57.492257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.209 [2024-07-24 19:06:57.492285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.209 qpair failed and we were unable to recover it. 00:24:20.209 [2024-07-24 19:06:57.492490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.209 [2024-07-24 19:06:57.492515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.209 qpair failed and we were unable to recover it. 
00:24:20.209 [2024-07-24 19:06:57.492689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.209 [2024-07-24 19:06:57.492714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.209 qpair failed and we were unable to recover it. 00:24:20.209 [2024-07-24 19:06:57.492894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.209 [2024-07-24 19:06:57.492918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 00:24:20.210 [2024-07-24 19:06:57.493068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.210 [2024-07-24 19:06:57.493093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 00:24:20.210 [2024-07-24 19:06:57.493228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.210 [2024-07-24 19:06:57.493253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 00:24:20.210 [2024-07-24 19:06:57.493376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.210 [2024-07-24 19:06:57.493401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 00:24:20.210 [2024-07-24 19:06:57.493578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.210 [2024-07-24 19:06:57.493603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 00:24:20.210 [2024-07-24 19:06:57.493793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.210 [2024-07-24 19:06:57.493840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 00:24:20.210 [2024-07-24 19:06:57.494020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.210 [2024-07-24 19:06:57.494048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 00:24:20.210 [2024-07-24 19:06:57.494248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.210 [2024-07-24 19:06:57.494274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 00:24:20.210 [2024-07-24 19:06:57.494404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.210 [2024-07-24 19:06:57.494429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 
00:24:20.210 [2024-07-24 19:06:57.494643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.210 [2024-07-24 19:06:57.494698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 00:24:20.210 [2024-07-24 19:06:57.494896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.210 [2024-07-24 19:06:57.494921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 00:24:20.210 [2024-07-24 19:06:57.495065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.210 [2024-07-24 19:06:57.495090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 00:24:20.210 [2024-07-24 19:06:57.495254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.210 [2024-07-24 19:06:57.495279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 00:24:20.210 [2024-07-24 19:06:57.495420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.210 [2024-07-24 19:06:57.495444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 00:24:20.210 [2024-07-24 19:06:57.495574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.210 [2024-07-24 19:06:57.495599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 00:24:20.210 [2024-07-24 19:06:57.495783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.210 [2024-07-24 19:06:57.495808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 00:24:20.210 [2024-07-24 19:06:57.495954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.210 [2024-07-24 19:06:57.495979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 00:24:20.210 [2024-07-24 19:06:57.496145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.210 [2024-07-24 19:06:57.496173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 00:24:20.210 [2024-07-24 19:06:57.496376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.210 [2024-07-24 19:06:57.496400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 
00:24:20.210 [2024-07-24 19:06:57.496558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.210 [2024-07-24 19:06:57.496582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 00:24:20.210 [2024-07-24 19:06:57.496758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.210 [2024-07-24 19:06:57.496783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 00:24:20.210 [2024-07-24 19:06:57.496960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.210 [2024-07-24 19:06:57.496985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 00:24:20.210 [2024-07-24 19:06:57.497180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.210 [2024-07-24 19:06:57.497209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 00:24:20.210 [2024-07-24 19:06:57.497367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.210 [2024-07-24 19:06:57.497394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 00:24:20.210 [2024-07-24 19:06:57.497563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.210 [2024-07-24 19:06:57.497588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 00:24:20.210 [2024-07-24 19:06:57.497721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.210 [2024-07-24 19:06:57.497745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 00:24:20.210 [2024-07-24 19:06:57.497902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.210 [2024-07-24 19:06:57.497927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 00:24:20.210 [2024-07-24 19:06:57.498083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.210 [2024-07-24 19:06:57.498115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 00:24:20.210 [2024-07-24 19:06:57.498251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.210 [2024-07-24 19:06:57.498276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 
00:24:20.210 [2024-07-24 19:06:57.498425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.210 [2024-07-24 19:06:57.498448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 00:24:20.210 [2024-07-24 19:06:57.498634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.210 [2024-07-24 19:06:57.498662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 00:24:20.210 [2024-07-24 19:06:57.498804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.210 [2024-07-24 19:06:57.498833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 00:24:20.210 [2024-07-24 19:06:57.498980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.210 [2024-07-24 19:06:57.499009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 00:24:20.210 [2024-07-24 19:06:57.499131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.210 [2024-07-24 19:06:57.499157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 00:24:20.210 [2024-07-24 19:06:57.499281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.210 [2024-07-24 19:06:57.499306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.210 qpair failed and we were unable to recover it. 00:24:20.211 [2024-07-24 19:06:57.499458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.499482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 00:24:20.211 [2024-07-24 19:06:57.499608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.499631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 00:24:20.211 [2024-07-24 19:06:57.499779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.499803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 00:24:20.211 [2024-07-24 19:06:57.499948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.499975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 
00:24:20.211 [2024-07-24 19:06:57.500141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.500170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 00:24:20.211 [2024-07-24 19:06:57.500321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.500346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 00:24:20.211 [2024-07-24 19:06:57.500498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.500522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 00:24:20.211 [2024-07-24 19:06:57.500678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.500705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 00:24:20.211 [2024-07-24 19:06:57.500875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.500903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 00:24:20.211 [2024-07-24 19:06:57.501079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.501108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 00:24:20.211 [2024-07-24 19:06:57.501281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.501307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 00:24:20.211 [2024-07-24 19:06:57.501454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.501481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 00:24:20.211 [2024-07-24 19:06:57.501727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.501777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 00:24:20.211 [2024-07-24 19:06:57.501946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.501971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 
00:24:20.211 [2024-07-24 19:06:57.502122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.502165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 00:24:20.211 [2024-07-24 19:06:57.502318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.502342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 00:24:20.211 [2024-07-24 19:06:57.502493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.502517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 00:24:20.211 [2024-07-24 19:06:57.502738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.502763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 00:24:20.211 [2024-07-24 19:06:57.502947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.502972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 00:24:20.211 [2024-07-24 19:06:57.503124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.503149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 00:24:20.211 [2024-07-24 19:06:57.503274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.503313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 00:24:20.211 [2024-07-24 19:06:57.503505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.503530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 00:24:20.211 [2024-07-24 19:06:57.503674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.503699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 00:24:20.211 [2024-07-24 19:06:57.503900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.503928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 
00:24:20.211 [2024-07-24 19:06:57.504098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.504131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 00:24:20.211 [2024-07-24 19:06:57.504261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.504286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 00:24:20.211 [2024-07-24 19:06:57.504436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.504479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 00:24:20.211 [2024-07-24 19:06:57.504681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.504708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 00:24:20.211 [2024-07-24 19:06:57.504849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.504875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 00:24:20.211 [2024-07-24 19:06:57.505030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.505054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 00:24:20.211 [2024-07-24 19:06:57.505180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.505205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 00:24:20.211 [2024-07-24 19:06:57.505380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.505407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 00:24:20.211 [2024-07-24 19:06:57.505609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.505636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 00:24:20.211 [2024-07-24 19:06:57.505785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.505810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 
00:24:20.211 [2024-07-24 19:06:57.506001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.506027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 00:24:20.211 [2024-07-24 19:06:57.506170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.506197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 00:24:20.211 [2024-07-24 19:06:57.506340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.506367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 00:24:20.211 [2024-07-24 19:06:57.506545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.211 [2024-07-24 19:06:57.506570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.211 qpair failed and we were unable to recover it. 00:24:20.212 [2024-07-24 19:06:57.506745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.212 [2024-07-24 19:06:57.506769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.212 qpair failed and we were unable to recover it. 00:24:20.212 [2024-07-24 19:06:57.506973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.212 [2024-07-24 19:06:57.507000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.212 qpair failed and we were unable to recover it. 00:24:20.212 [2024-07-24 19:06:57.507196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.212 [2024-07-24 19:06:57.507225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.212 qpair failed and we were unable to recover it. 00:24:20.212 [2024-07-24 19:06:57.507407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.212 [2024-07-24 19:06:57.507432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.212 qpair failed and we were unable to recover it. 00:24:20.212 [2024-07-24 19:06:57.507607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.212 [2024-07-24 19:06:57.507634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.212 qpair failed and we were unable to recover it. 00:24:20.212 [2024-07-24 19:06:57.507815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.212 [2024-07-24 19:06:57.507841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.212 qpair failed and we were unable to recover it. 
00:24:20.212 [2024-07-24 19:06:57.507997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.212 [2024-07-24 19:06:57.508038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.212 qpair failed and we were unable to recover it. 00:24:20.212 [2024-07-24 19:06:57.508215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.212 [2024-07-24 19:06:57.508240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.212 qpair failed and we were unable to recover it. 00:24:20.212 [2024-07-24 19:06:57.508371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.212 [2024-07-24 19:06:57.508395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.212 qpair failed and we were unable to recover it. 00:24:20.212 [2024-07-24 19:06:57.508546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.212 [2024-07-24 19:06:57.508570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.212 qpair failed and we were unable to recover it. 00:24:20.212 [2024-07-24 19:06:57.508769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.212 [2024-07-24 19:06:57.508823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.212 qpair failed and we were unable to recover it. 00:24:20.212 [2024-07-24 19:06:57.508992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.212 [2024-07-24 19:06:57.509017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.212 qpair failed and we were unable to recover it. 00:24:20.212 [2024-07-24 19:06:57.509168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.212 [2024-07-24 19:06:57.509209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.212 qpair failed and we were unable to recover it. 00:24:20.212 [2024-07-24 19:06:57.509387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.212 [2024-07-24 19:06:57.509411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.212 qpair failed and we were unable to recover it. 00:24:20.212 [2024-07-24 19:06:57.509569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.212 [2024-07-24 19:06:57.509609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.212 qpair failed and we were unable to recover it. 00:24:20.212 [2024-07-24 19:06:57.509780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.212 [2024-07-24 19:06:57.509805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.212 qpair failed and we were unable to recover it. 
00:24:20.212 [2024-07-24 19:06:57.509962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.212 [2024-07-24 19:06:57.510002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.212 qpair failed and we were unable to recover it. 00:24:20.212 [2024-07-24 19:06:57.510171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.212 [2024-07-24 19:06:57.510198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.212 qpair failed and we were unable to recover it. 00:24:20.212 [2024-07-24 19:06:57.510391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.212 [2024-07-24 19:06:57.510417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.212 qpair failed and we were unable to recover it. 00:24:20.212 [2024-07-24 19:06:57.510559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.212 [2024-07-24 19:06:57.510585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.212 qpair failed and we were unable to recover it. 00:24:20.212 [2024-07-24 19:06:57.510737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.212 [2024-07-24 19:06:57.510778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.212 qpair failed and we were unable to recover it. 00:24:20.212 [2024-07-24 19:06:57.510970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.212 [2024-07-24 19:06:57.510997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.212 qpair failed and we were unable to recover it. 00:24:20.212 [2024-07-24 19:06:57.511140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.212 [2024-07-24 19:06:57.511168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.212 qpair failed and we were unable to recover it. 00:24:20.212 [2024-07-24 19:06:57.511339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.212 [2024-07-24 19:06:57.511364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.212 qpair failed and we were unable to recover it. 00:24:20.212 [2024-07-24 19:06:57.511520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.212 [2024-07-24 19:06:57.511544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.212 qpair failed and we were unable to recover it. 00:24:20.212 [2024-07-24 19:06:57.511687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.212 [2024-07-24 19:06:57.511712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.212 qpair failed and we were unable to recover it. 
00:24:20.212 [2024-07-24 19:06:57.511868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.212 [2024-07-24 19:06:57.511909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.212 qpair failed and we were unable to recover it. 00:24:20.212 [2024-07-24 19:06:57.512086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.212 [2024-07-24 19:06:57.512117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.212 qpair failed and we were unable to recover it. 00:24:20.212 [2024-07-24 19:06:57.512283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.212 [2024-07-24 19:06:57.512310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.212 qpair failed and we were unable to recover it. 00:24:20.212 [2024-07-24 19:06:57.512477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.212 [2024-07-24 19:06:57.512504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.212 qpair failed and we were unable to recover it. 00:24:20.212 [2024-07-24 19:06:57.512669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.212 [2024-07-24 19:06:57.512697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.212 qpair failed and we were unable to recover it. 00:24:20.213 [2024-07-24 19:06:57.512844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.213 [2024-07-24 19:06:57.512868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.213 qpair failed and we were unable to recover it. 00:24:20.213 [2024-07-24 19:06:57.513039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.213 [2024-07-24 19:06:57.513077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.213 qpair failed and we were unable to recover it. 00:24:20.213 [2024-07-24 19:06:57.513222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.213 [2024-07-24 19:06:57.513247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.213 qpair failed and we were unable to recover it. 00:24:20.213 [2024-07-24 19:06:57.513369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.213 [2024-07-24 19:06:57.513393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.213 qpair failed and we were unable to recover it. 00:24:20.213 [2024-07-24 19:06:57.513608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.213 [2024-07-24 19:06:57.513632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.213 qpair failed and we were unable to recover it. 
00:24:20.213 [2024-07-24 19:06:57.513812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.213 [2024-07-24 19:06:57.513858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.213 qpair failed and we were unable to recover it. 00:24:20.213 [2024-07-24 19:06:57.514044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.213 [2024-07-24 19:06:57.514071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.213 qpair failed and we were unable to recover it. 00:24:20.213 [2024-07-24 19:06:57.514276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.213 [2024-07-24 19:06:57.514301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.213 qpair failed and we were unable to recover it. 00:24:20.213 [2024-07-24 19:06:57.514457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.213 [2024-07-24 19:06:57.514482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.213 qpair failed and we were unable to recover it. 00:24:20.213 [2024-07-24 19:06:57.514608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.213 [2024-07-24 19:06:57.514649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.213 qpair failed and we were unable to recover it. 00:24:20.213 [2024-07-24 19:06:57.514823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.213 [2024-07-24 19:06:57.514851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.213 qpair failed and we were unable to recover it. 00:24:20.213 [2024-07-24 19:06:57.514988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.213 [2024-07-24 19:06:57.515016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.213 qpair failed and we were unable to recover it. 00:24:20.213 [2024-07-24 19:06:57.515212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.213 [2024-07-24 19:06:57.515238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.213 qpair failed and we were unable to recover it. 00:24:20.213 [2024-07-24 19:06:57.515409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.213 [2024-07-24 19:06:57.515434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.213 qpair failed and we were unable to recover it. 00:24:20.213 [2024-07-24 19:06:57.515585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.213 [2024-07-24 19:06:57.515626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.213 qpair failed and we were unable to recover it. 
00:24:20.213 [2024-07-24 19:06:57.515786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.213 [2024-07-24 19:06:57.515814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.213 qpair failed and we were unable to recover it.
00:24:20.219 [previous three messages repeated for every reconnect attempt from 2024-07-24 19:06:57.515786 through 19:06:57.555530, always errno = 111 against tqpair=0x17c6250, addr=10.0.0.2, port=4420; no attempt recovered]
00:24:20.219 [2024-07-24 19:06:57.555703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.219 [2024-07-24 19:06:57.555729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.219 qpair failed and we were unable to recover it. 00:24:20.219 [2024-07-24 19:06:57.555921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.219 [2024-07-24 19:06:57.555948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.219 qpair failed and we were unable to recover it. 00:24:20.219 [2024-07-24 19:06:57.556143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.219 [2024-07-24 19:06:57.556171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.219 qpair failed and we were unable to recover it. 00:24:20.219 [2024-07-24 19:06:57.556346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.219 [2024-07-24 19:06:57.556371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.219 qpair failed and we were unable to recover it. 00:24:20.219 [2024-07-24 19:06:57.556521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.219 [2024-07-24 19:06:57.556545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.219 qpair failed and we were unable to recover it. 00:24:20.219 [2024-07-24 19:06:57.556718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.219 [2024-07-24 19:06:57.556742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.219 qpair failed and we were unable to recover it. 00:24:20.219 [2024-07-24 19:06:57.556892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.219 [2024-07-24 19:06:57.556917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.219 qpair failed and we were unable to recover it. 00:24:20.219 [2024-07-24 19:06:57.557038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.219 [2024-07-24 19:06:57.557062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.219 qpair failed and we were unable to recover it. 00:24:20.219 [2024-07-24 19:06:57.557245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.219 [2024-07-24 19:06:57.557288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.219 qpair failed and we were unable to recover it. 00:24:20.219 [2024-07-24 19:06:57.557465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.219 [2024-07-24 19:06:57.557489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.219 qpair failed and we were unable to recover it. 
00:24:20.219 [2024-07-24 19:06:57.557660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.219 [2024-07-24 19:06:57.557702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.219 qpair failed and we were unable to recover it. 00:24:20.219 [2024-07-24 19:06:57.557882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.219 [2024-07-24 19:06:57.557907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.219 qpair failed and we were unable to recover it. 00:24:20.219 [2024-07-24 19:06:57.558055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.219 [2024-07-24 19:06:57.558079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.219 qpair failed and we were unable to recover it. 00:24:20.219 [2024-07-24 19:06:57.558235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.219 [2024-07-24 19:06:57.558260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.219 qpair failed and we were unable to recover it. 00:24:20.219 [2024-07-24 19:06:57.558430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.219 [2024-07-24 19:06:57.558458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.219 qpair failed and we were unable to recover it. 00:24:20.219 [2024-07-24 19:06:57.558655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.219 [2024-07-24 19:06:57.558683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.219 qpair failed and we were unable to recover it. 00:24:20.219 [2024-07-24 19:06:57.558904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.219 [2024-07-24 19:06:57.558958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.219 qpair failed and we were unable to recover it. 00:24:20.219 [2024-07-24 19:06:57.559139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.559164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 00:24:20.220 [2024-07-24 19:06:57.559297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.559321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 00:24:20.220 [2024-07-24 19:06:57.559473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.559498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 
00:24:20.220 [2024-07-24 19:06:57.559670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.559695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 00:24:20.220 [2024-07-24 19:06:57.559815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.559839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 00:24:20.220 [2024-07-24 19:06:57.559964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.559989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 00:24:20.220 [2024-07-24 19:06:57.560176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.560202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 00:24:20.220 [2024-07-24 19:06:57.560351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.560391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 00:24:20.220 [2024-07-24 19:06:57.560590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.560614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 00:24:20.220 [2024-07-24 19:06:57.560766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.560791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 00:24:20.220 [2024-07-24 19:06:57.560942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.560967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 00:24:20.220 [2024-07-24 19:06:57.561146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.561174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 00:24:20.220 [2024-07-24 19:06:57.561380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.561406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 
00:24:20.220 [2024-07-24 19:06:57.561555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.561595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 00:24:20.220 [2024-07-24 19:06:57.561772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.561796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 00:24:20.220 [2024-07-24 19:06:57.561947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.561971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 00:24:20.220 [2024-07-24 19:06:57.562152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.562177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 00:24:20.220 [2024-07-24 19:06:57.562334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.562361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 00:24:20.220 [2024-07-24 19:06:57.562537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.562560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 00:24:20.220 [2024-07-24 19:06:57.562732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.562757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 00:24:20.220 [2024-07-24 19:06:57.562902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.562943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 00:24:20.220 [2024-07-24 19:06:57.563113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.563141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 00:24:20.220 [2024-07-24 19:06:57.563306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.563330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 
00:24:20.220 [2024-07-24 19:06:57.563528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.563555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 00:24:20.220 [2024-07-24 19:06:57.563698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.563725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 00:24:20.220 [2024-07-24 19:06:57.563911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.563943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 00:24:20.220 [2024-07-24 19:06:57.564113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.564138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 00:24:20.220 [2024-07-24 19:06:57.564287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.564311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 00:24:20.220 [2024-07-24 19:06:57.564465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.564505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 00:24:20.220 [2024-07-24 19:06:57.564695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.564723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 00:24:20.220 [2024-07-24 19:06:57.564872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.564896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 00:24:20.220 [2024-07-24 19:06:57.565022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.565047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 00:24:20.220 [2024-07-24 19:06:57.565249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.565275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 
00:24:20.220 [2024-07-24 19:06:57.565444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.565472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 00:24:20.220 [2024-07-24 19:06:57.565634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.565659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 00:24:20.220 [2024-07-24 19:06:57.565807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.565854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 00:24:20.220 [2024-07-24 19:06:57.566029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.220 [2024-07-24 19:06:57.566053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.220 qpair failed and we were unable to recover it. 00:24:20.221 [2024-07-24 19:06:57.566202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.566228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 00:24:20.221 [2024-07-24 19:06:57.566378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.566403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 00:24:20.221 [2024-07-24 19:06:57.566584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.566610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 00:24:20.221 [2024-07-24 19:06:57.566812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.566836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 00:24:20.221 [2024-07-24 19:06:57.566984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.567025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 00:24:20.221 [2024-07-24 19:06:57.567173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.567198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 
00:24:20.221 [2024-07-24 19:06:57.567351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.567376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 00:24:20.221 [2024-07-24 19:06:57.567589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.567613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 00:24:20.221 [2024-07-24 19:06:57.567760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.567800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 00:24:20.221 [2024-07-24 19:06:57.567948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.567973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 00:24:20.221 [2024-07-24 19:06:57.568128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.568154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 00:24:20.221 [2024-07-24 19:06:57.568301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.568325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 00:24:20.221 [2024-07-24 19:06:57.568533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.568559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 00:24:20.221 [2024-07-24 19:06:57.568707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.568732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 00:24:20.221 [2024-07-24 19:06:57.568924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.568951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 00:24:20.221 [2024-07-24 19:06:57.569080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.569114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 
00:24:20.221 [2024-07-24 19:06:57.569286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.569312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 00:24:20.221 [2024-07-24 19:06:57.569470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.569494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 00:24:20.221 [2024-07-24 19:06:57.569620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.569646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 00:24:20.221 [2024-07-24 19:06:57.569794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.569818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 00:24:20.221 [2024-07-24 19:06:57.569974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.570000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 00:24:20.221 [2024-07-24 19:06:57.570193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.570219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 00:24:20.221 [2024-07-24 19:06:57.570436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.570493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 00:24:20.221 [2024-07-24 19:06:57.570633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.570661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 00:24:20.221 [2024-07-24 19:06:57.570830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.570857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 00:24:20.221 [2024-07-24 19:06:57.571031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.571056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 
00:24:20.221 [2024-07-24 19:06:57.571241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.571270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 00:24:20.221 [2024-07-24 19:06:57.571405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.571433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 00:24:20.221 [2024-07-24 19:06:57.571556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.571583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 00:24:20.221 [2024-07-24 19:06:57.571768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.571794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 00:24:20.221 [2024-07-24 19:06:57.571962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.571991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 00:24:20.221 [2024-07-24 19:06:57.572125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.572153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 00:24:20.221 [2024-07-24 19:06:57.572323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.572350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 00:24:20.221 [2024-07-24 19:06:57.572502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.572528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 00:24:20.221 [2024-07-24 19:06:57.572713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.221 [2024-07-24 19:06:57.572754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.221 qpair failed and we were unable to recover it. 00:24:20.222 [2024-07-24 19:06:57.572923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.222 [2024-07-24 19:06:57.572949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.222 qpair failed and we were unable to recover it. 
00:24:20.222 [2024-07-24 19:06:57.573087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.222 [2024-07-24 19:06:57.573121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.222 qpair failed and we were unable to recover it. 00:24:20.222 [2024-07-24 19:06:57.573321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.222 [2024-07-24 19:06:57.573346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.222 qpair failed and we were unable to recover it. 00:24:20.222 [2024-07-24 19:06:57.573488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.222 [2024-07-24 19:06:57.573515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.222 qpair failed and we were unable to recover it. 00:24:20.222 [2024-07-24 19:06:57.573709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.222 [2024-07-24 19:06:57.573736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.222 qpair failed and we were unable to recover it. 00:24:20.222 [2024-07-24 19:06:57.573908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.222 [2024-07-24 19:06:57.573931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.222 qpair failed and we were unable to recover it. 00:24:20.222 [2024-07-24 19:06:57.574088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.222 [2024-07-24 19:06:57.574120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.222 qpair failed and we were unable to recover it. 00:24:20.222 [2024-07-24 19:06:57.574322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.222 [2024-07-24 19:06:57.574350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.222 qpair failed and we were unable to recover it. 00:24:20.222 [2024-07-24 19:06:57.574510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.222 [2024-07-24 19:06:57.574538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.222 qpair failed and we were unable to recover it. 00:24:20.222 [2024-07-24 19:06:57.574675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.222 [2024-07-24 19:06:57.574703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.222 qpair failed and we were unable to recover it. 00:24:20.222 [2024-07-24 19:06:57.574887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.222 [2024-07-24 19:06:57.574911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.222 qpair failed and we were unable to recover it. 
00:24:20.222 [2024-07-24 19:06:57.575085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.222 [2024-07-24 19:06:57.575132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.222 qpair failed and we were unable to recover it. 00:24:20.222 [2024-07-24 19:06:57.575334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.222 [2024-07-24 19:06:57.575359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.222 qpair failed and we were unable to recover it. 00:24:20.222 [2024-07-24 19:06:57.575479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.222 [2024-07-24 19:06:57.575503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.222 qpair failed and we were unable to recover it. 00:24:20.222 [2024-07-24 19:06:57.575682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.222 [2024-07-24 19:06:57.575706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.222 qpair failed and we were unable to recover it. 00:24:20.222 [2024-07-24 19:06:57.575831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.222 [2024-07-24 19:06:57.575872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.222 qpair failed and we were unable to recover it. 00:24:20.222 [2024-07-24 19:06:57.576019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.222 [2024-07-24 19:06:57.576045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.222 qpair failed and we were unable to recover it. 00:24:20.222 [2024-07-24 19:06:57.576230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.222 [2024-07-24 19:06:57.576255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.222 qpair failed and we were unable to recover it. 00:24:20.222 [2024-07-24 19:06:57.576383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.222 [2024-07-24 19:06:57.576408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.222 qpair failed and we were unable to recover it. 00:24:20.222 [2024-07-24 19:06:57.576561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.222 [2024-07-24 19:06:57.576585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.222 qpair failed and we were unable to recover it. 00:24:20.222 [2024-07-24 19:06:57.576738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.222 [2024-07-24 19:06:57.576779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.222 qpair failed and we were unable to recover it. 
00:24:20.222 [2024-07-24 19:06:57.576911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.222 [2024-07-24 19:06:57.576943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.222 qpair failed and we were unable to recover it. 00:24:20.222 [2024-07-24 19:06:57.577098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.222 [2024-07-24 19:06:57.577128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.222 qpair failed and we were unable to recover it. 00:24:20.222 [2024-07-24 19:06:57.577261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.222 [2024-07-24 19:06:57.577301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.222 qpair failed and we were unable to recover it. 00:24:20.222 [2024-07-24 19:06:57.577481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.222 [2024-07-24 19:06:57.577508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.222 qpair failed and we were unable to recover it. 00:24:20.222 [2024-07-24 19:06:57.577659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.222 [2024-07-24 19:06:57.577701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.222 qpair failed and we were unable to recover it. 00:24:20.222 [2024-07-24 19:06:57.577855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.222 [2024-07-24 19:06:57.577879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.222 qpair failed and we were unable to recover it. 00:24:20.222 [2024-07-24 19:06:57.578029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.222 [2024-07-24 19:06:57.578070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.222 qpair failed and we were unable to recover it. 00:24:20.222 [2024-07-24 19:06:57.578222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.222 [2024-07-24 19:06:57.578247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.222 qpair failed and we were unable to recover it. 00:24:20.222 [2024-07-24 19:06:57.578369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.222 [2024-07-24 19:06:57.578394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.222 qpair failed and we were unable to recover it. 00:24:20.222 [2024-07-24 19:06:57.578519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.223 [2024-07-24 19:06:57.578543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.223 qpair failed and we were unable to recover it. 
00:24:20.223 [2024-07-24 19:06:57.578715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.223 [2024-07-24 19:06:57.578740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.223 qpair failed and we were unable to recover it. 00:24:20.223 [2024-07-24 19:06:57.578866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.223 [2024-07-24 19:06:57.578891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.223 qpair failed and we were unable to recover it. 00:24:20.223 [2024-07-24 19:06:57.579096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.223 [2024-07-24 19:06:57.579130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.223 qpair failed and we were unable to recover it. 00:24:20.223 [2024-07-24 19:06:57.579331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.223 [2024-07-24 19:06:57.579355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.223 qpair failed and we were unable to recover it. 00:24:20.223 [2024-07-24 19:06:57.579576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.223 [2024-07-24 19:06:57.579628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.223 qpair failed and we were unable to recover it. 00:24:20.223 [2024-07-24 19:06:57.579805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.223 [2024-07-24 19:06:57.579829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.223 qpair failed and we were unable to recover it. 00:24:20.223 [2024-07-24 19:06:57.579977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.223 [2024-07-24 19:06:57.580002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.223 qpair failed and we were unable to recover it. 00:24:20.223 [2024-07-24 19:06:57.580174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.223 [2024-07-24 19:06:57.580199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.223 qpair failed and we were unable to recover it. 00:24:20.223 [2024-07-24 19:06:57.580345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.223 [2024-07-24 19:06:57.580371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.223 qpair failed and we were unable to recover it. 00:24:20.223 [2024-07-24 19:06:57.580540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.223 [2024-07-24 19:06:57.580567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.223 qpair failed and we were unable to recover it. 
00:24:20.223 [2024-07-24 19:06:57.580744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.223 [2024-07-24 19:06:57.580771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.223 qpair failed and we were unable to recover it.
00:24:20.223 [2024-07-24 19:06:57.580946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.223 [2024-07-24 19:06:57.580971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.223 qpair failed and we were unable to recover it.
00:24:20.223 [2024-07-24 19:06:57.581099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.223 [2024-07-24 19:06:57.581148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.223 qpair failed and we were unable to recover it.
00:24:20.223 [2024-07-24 19:06:57.581290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.223 [2024-07-24 19:06:57.581318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.223 qpair failed and we were unable to recover it.
00:24:20.223 [2024-07-24 19:06:57.581449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.223 [2024-07-24 19:06:57.581475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.223 qpair failed and we were unable to recover it.
00:24:20.223 [2024-07-24 19:06:57.581639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.223 [2024-07-24 19:06:57.581663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.223 qpair failed and we were unable to recover it.
00:24:20.223 [2024-07-24 19:06:57.581823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.223 [2024-07-24 19:06:57.581850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.223 qpair failed and we were unable to recover it.
00:24:20.223 [2024-07-24 19:06:57.582039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.223 [2024-07-24 19:06:57.582070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.223 qpair failed and we were unable to recover it.
00:24:20.223 [2024-07-24 19:06:57.582267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.223 [2024-07-24 19:06:57.582295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.223 qpair failed and we were unable to recover it.
00:24:20.223 [2024-07-24 19:06:57.582440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.223 [2024-07-24 19:06:57.582464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.223 qpair failed and we were unable to recover it.
00:24:20.223 [2024-07-24 19:06:57.582618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.223 [2024-07-24 19:06:57.582643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.223 qpair failed and we were unable to recover it.
00:24:20.223 [2024-07-24 19:06:57.582815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.223 [2024-07-24 19:06:57.582840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.223 qpair failed and we were unable to recover it.
00:24:20.223 [2024-07-24 19:06:57.583018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.223 [2024-07-24 19:06:57.583045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.223 qpair failed and we were unable to recover it.
00:24:20.223 [2024-07-24 19:06:57.583224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.223 [2024-07-24 19:06:57.583249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.223 qpair failed and we were unable to recover it.
00:24:20.223 [2024-07-24 19:06:57.583401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.223 [2024-07-24 19:06:57.583426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.223 qpair failed and we were unable to recover it.
00:24:20.223 [2024-07-24 19:06:57.583600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.223 [2024-07-24 19:06:57.583627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.223 qpair failed and we were unable to recover it.
00:24:20.223 [2024-07-24 19:06:57.583831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.223 [2024-07-24 19:06:57.583856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.223 qpair failed and we were unable to recover it.
00:24:20.223 [2024-07-24 19:06:57.583977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.223 [2024-07-24 19:06:57.584001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.223 qpair failed and we were unable to recover it.
00:24:20.223 [2024-07-24 19:06:57.584132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.223 [2024-07-24 19:06:57.584175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.223 qpair failed and we were unable to recover it.
00:24:20.223 [2024-07-24 19:06:57.584307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.223 [2024-07-24 19:06:57.584333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.223 qpair failed and we were unable to recover it.
00:24:20.223 [2024-07-24 19:06:57.584498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.223 [2024-07-24 19:06:57.584525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.223 qpair failed and we were unable to recover it.
00:24:20.223 [2024-07-24 19:06:57.584701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.223 [2024-07-24 19:06:57.584727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.223 qpair failed and we were unable to recover it.
00:24:20.223 [2024-07-24 19:06:57.584882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.223 [2024-07-24 19:06:57.584907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.223 qpair failed and we were unable to recover it.
00:24:20.223 [2024-07-24 19:06:57.585060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.223 [2024-07-24 19:06:57.585084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.223 qpair failed and we were unable to recover it.
00:24:20.223 [2024-07-24 19:06:57.585240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.223 [2024-07-24 19:06:57.585264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.223 qpair failed and we were unable to recover it.
00:24:20.223 [2024-07-24 19:06:57.585392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.223 [2024-07-24 19:06:57.585417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.223 qpair failed and we were unable to recover it.
00:24:20.223 [2024-07-24 19:06:57.585562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.223 [2024-07-24 19:06:57.585586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.223 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.585736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.585776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.585959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.585986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.586135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.586161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.586289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.586313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.586437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.586463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.586609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.586635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.586814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.586838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.587010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.587042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.587210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.587238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.587411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.587437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.587558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.587582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.587740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.587780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.587983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.588008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.588132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.588157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.588307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.588331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.588480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.588505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.588669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.588693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.588838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.588862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.589013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.589038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.589211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.589239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.589404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.589431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.589618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.589643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.589772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.589797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.589924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.589966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.590131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.590158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.590326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.590355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.590542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.590567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.590717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.590744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.590935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.590962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.591130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.591158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.591312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.591337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.591458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.591483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.591661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.591689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.591859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.591883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.592012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.224 [2024-07-24 19:06:57.592036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.224 qpair failed and we were unable to recover it.
00:24:20.224 [2024-07-24 19:06:57.592197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.592239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.592386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.592414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.592555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.592582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.592773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.592796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.592922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.592963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.593130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.593158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.593303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.593330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.593470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.593494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.593650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.593674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.593826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.593850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.594000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.594024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.594186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.594212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.594343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.594367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.594541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.594574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.594716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.594744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.594892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.594917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.595045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.595069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.595229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.595255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.595376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.595400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.595560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.595583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.595715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.595740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.595861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.595886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.596027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.596054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.596217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.596244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.596374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.596399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.596524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.596550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.596744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.596772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.596950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.596975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.597152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.597181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.597314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.597340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.597511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.597536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.597686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.597710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.597837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.597861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.597992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.598016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.598162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.225 [2024-07-24 19:06:57.598191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.225 qpair failed and we were unable to recover it.
00:24:20.225 [2024-07-24 19:06:57.598349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.598373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.598528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.598553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.598706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.598734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.598880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.598904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.599035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.599060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.599256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.599289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.599457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.599484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.599625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.599652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.599812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.599837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.599964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.599988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.600144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.600169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.600344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.600371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.600521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.600545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.600673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.600698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.600849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.600873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.600998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.601021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.601168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.601194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.601347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.601372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.601499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.601524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.601667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.601694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.601865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.601889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.602034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.602058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.602183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.602208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.602356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.602380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.602508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.602533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.602705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.602731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.602901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.602926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.603121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.603150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.603302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.603327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.603481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.603522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.603732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.603756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.603879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.603904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.604092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.604139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.604286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.604314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.604504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.604531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.604707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.604735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.604880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.226 [2024-07-24 19:06:57.604905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.226 qpair failed and we were unable to recover it.
00:24:20.226 [2024-07-24 19:06:57.605058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.227 [2024-07-24 19:06:57.605099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.227 qpair failed and we were unable to recover it.
00:24:20.227 [2024-07-24 19:06:57.605264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.227 [2024-07-24 19:06:57.605288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.227 qpair failed and we were unable to recover it.
00:24:20.227 [2024-07-24 19:06:57.605434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.227 [2024-07-24 19:06:57.605459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.227 qpair failed and we were unable to recover it.
00:24:20.227 [2024-07-24 19:06:57.605658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.227 [2024-07-24 19:06:57.605683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.227 qpair failed and we were unable to recover it.
00:24:20.227 [2024-07-24 19:06:57.605875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.227 [2024-07-24 19:06:57.605902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.227 qpair failed and we were unable to recover it.
00:24:20.227 [2024-07-24 19:06:57.606028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.227 [2024-07-24 19:06:57.606055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.227 qpair failed and we were unable to recover it.
00:24:20.227 [2024-07-24 19:06:57.606224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.227 [2024-07-24 19:06:57.606253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.227 qpair failed and we were unable to recover it.
00:24:20.227 [2024-07-24 19:06:57.606427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.227 [2024-07-24 19:06:57.606452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.227 qpair failed and we were unable to recover it.
00:24:20.227 [2024-07-24 19:06:57.606583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.227 [2024-07-24 19:06:57.606608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.227 qpair failed and we were unable to recover it.
00:24:20.227 [2024-07-24 19:06:57.606740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.227 [2024-07-24 19:06:57.606765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.227 qpair failed and we were unable to recover it.
00:24:20.227 [2024-07-24 19:06:57.606938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.227 [2024-07-24 19:06:57.606966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.227 qpair failed and we were unable to recover it.
00:24:20.227 [2024-07-24 19:06:57.607114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.227 [2024-07-24 19:06:57.607140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.227 qpair failed and we were unable to recover it.
00:24:20.227 [2024-07-24 19:06:57.607268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.227 [2024-07-24 19:06:57.607292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.227 qpair failed and we were unable to recover it.
00:24:20.227 [2024-07-24 19:06:57.607494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.227 [2024-07-24 19:06:57.607522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.227 qpair failed and we were unable to recover it.
00:24:20.227 [2024-07-24 19:06:57.607685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.227 [2024-07-24 19:06:57.607712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.227 qpair failed and we were unable to recover it.
00:24:20.227 [2024-07-24 19:06:57.607880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.227 [2024-07-24 19:06:57.607904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.227 qpair failed and we were unable to recover it.
00:24:20.227 [2024-07-24 19:06:57.608048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.227 [2024-07-24 19:06:57.608076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.227 qpair failed and we were unable to recover it.
00:24:20.227 [2024-07-24 19:06:57.608255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.227 [2024-07-24 19:06:57.608281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.227 qpair failed and we were unable to recover it.
00:24:20.227 [2024-07-24 19:06:57.608456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.227 [2024-07-24 19:06:57.608483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.227 qpair failed and we were unable to recover it.
00:24:20.227 [2024-07-24 19:06:57.608663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.227 [2024-07-24 19:06:57.608688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.227 qpair failed and we were unable to recover it.
00:24:20.227 [2024-07-24 19:06:57.608855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.227 [2024-07-24 19:06:57.608882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.227 qpair failed and we were unable to recover it.
00:24:20.227 [2024-07-24 19:06:57.609051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.227 [2024-07-24 19:06:57.609079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.227 qpair failed and we were unable to recover it.
00:24:20.227 [2024-07-24 19:06:57.609253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.227 [2024-07-24 19:06:57.609297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.227 qpair failed and we were unable to recover it.
00:24:20.227 [2024-07-24 19:06:57.609495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.227 [2024-07-24 19:06:57.609523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.227 qpair failed and we were unable to recover it.
00:24:20.227 [2024-07-24 19:06:57.609678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.227 [2024-07-24 19:06:57.609723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.227 qpair failed and we were unable to recover it.
00:24:20.227 [2024-07-24 19:06:57.609868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.227 [2024-07-24 19:06:57.609897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.227 qpair failed and we were unable to recover it.
00:24:20.227 [2024-07-24 19:06:57.610073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.227 [2024-07-24 19:06:57.610100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.227 qpair failed and we were unable to recover it.
00:24:20.227 [2024-07-24 19:06:57.610264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.227 [2024-07-24 19:06:57.610291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.227 qpair failed and we were unable to recover it.
00:24:20.227 [2024-07-24 19:06:57.610471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.227 [2024-07-24 19:06:57.610522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.227 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.610688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.610718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.610882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.610911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.611115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.611142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.611384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.611413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.611612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.611638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.611766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.611807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.611981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.612007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.612158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.612184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.612319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.612362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.612605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.612634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.612832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.612857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.613047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.613076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.613223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.613248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.613372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.613397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.613574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.613600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.613751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.613793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.613939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.613967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.614124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.614151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.614305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.614332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.614505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.614534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.614697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.614731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.614906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.614931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.615083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.615115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.615284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.615313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.615478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.615506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.615682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.615708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.615885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.615911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.616035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.616061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.616223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.616250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.616396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.616426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.616587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.616612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.616786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.616811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.616999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.617023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.617184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.617210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.617338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.617363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.617485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.617509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.617664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.228 [2024-07-24 19:06:57.617688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.228 qpair failed and we were unable to recover it.
00:24:20.228 [2024-07-24 19:06:57.617852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.229 [2024-07-24 19:06:57.617900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.229 qpair failed and we were unable to recover it.
00:24:20.229 [2024-07-24 19:06:57.618073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.229 [2024-07-24 19:06:57.618099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.229 qpair failed and we were unable to recover it.
00:24:20.229 [2024-07-24 19:06:57.618243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.229 [2024-07-24 19:06:57.618267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.229 qpair failed and we were unable to recover it.
00:24:20.229 [2024-07-24 19:06:57.618392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.229 [2024-07-24 19:06:57.618417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.229 qpair failed and we were unable to recover it.
00:24:20.229 [2024-07-24 19:06:57.618538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.229 [2024-07-24 19:06:57.618563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.229 qpair failed and we were unable to recover it.
00:24:20.229 [2024-07-24 19:06:57.618737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.229 [2024-07-24 19:06:57.618762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.229 qpair failed and we were unable to recover it.
00:24:20.229 [2024-07-24 19:06:57.618957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.229 [2024-07-24 19:06:57.618985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.229 qpair failed and we were unable to recover it.
00:24:20.229 [2024-07-24 19:06:57.619153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.229 [2024-07-24 19:06:57.619181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.229 qpair failed and we were unable to recover it.
00:24:20.229 [2024-07-24 19:06:57.619314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.229 [2024-07-24 19:06:57.619342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.229 qpair failed and we were unable to recover it.
00:24:20.229 [2024-07-24 19:06:57.619488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.229 [2024-07-24 19:06:57.619513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.229 qpair failed and we were unable to recover it.
00:24:20.229 [2024-07-24 19:06:57.619663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.229 [2024-07-24 19:06:57.619710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.229 qpair failed and we were unable to recover it. 00:24:20.229 [2024-07-24 19:06:57.619914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.229 [2024-07-24 19:06:57.619939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.229 qpair failed and we were unable to recover it. 00:24:20.229 [2024-07-24 19:06:57.620090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.229 [2024-07-24 19:06:57.620140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.229 qpair failed and we were unable to recover it. 00:24:20.229 [2024-07-24 19:06:57.620283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.229 [2024-07-24 19:06:57.620308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.229 qpair failed and we were unable to recover it. 00:24:20.229 [2024-07-24 19:06:57.620421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.229 [2024-07-24 19:06:57.620446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.229 qpair failed and we were unable to recover it. 00:24:20.229 [2024-07-24 19:06:57.620599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.229 [2024-07-24 19:06:57.620626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.229 qpair failed and we were unable to recover it. 00:24:20.229 [2024-07-24 19:06:57.620827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.229 [2024-07-24 19:06:57.620852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.229 qpair failed and we were unable to recover it. 00:24:20.229 [2024-07-24 19:06:57.620994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.229 [2024-07-24 19:06:57.621021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.229 qpair failed and we were unable to recover it. 00:24:20.229 [2024-07-24 19:06:57.621188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.229 [2024-07-24 19:06:57.621213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.229 qpair failed and we were unable to recover it. 00:24:20.229 [2024-07-24 19:06:57.621331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.229 [2024-07-24 19:06:57.621356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.229 qpair failed and we were unable to recover it. 
00:24:20.229 [2024-07-24 19:06:57.621542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.229 [2024-07-24 19:06:57.621569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.229 qpair failed and we were unable to recover it. 00:24:20.229 [2024-07-24 19:06:57.621731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.229 [2024-07-24 19:06:57.621759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.229 qpair failed and we were unable to recover it. 00:24:20.229 [2024-07-24 19:06:57.621902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.229 [2024-07-24 19:06:57.621930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.229 qpair failed and we were unable to recover it. 00:24:20.229 [2024-07-24 19:06:57.622100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.229 [2024-07-24 19:06:57.622134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.229 qpair failed and we were unable to recover it. 00:24:20.229 [2024-07-24 19:06:57.622286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.229 [2024-07-24 19:06:57.622310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.229 qpair failed and we were unable to recover it. 00:24:20.229 [2024-07-24 19:06:57.622432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.229 [2024-07-24 19:06:57.622455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.229 qpair failed and we were unable to recover it. 00:24:20.229 [2024-07-24 19:06:57.622608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.229 [2024-07-24 19:06:57.622633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.229 qpair failed and we were unable to recover it. 00:24:20.229 [2024-07-24 19:06:57.622755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.229 [2024-07-24 19:06:57.622778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.229 qpair failed and we were unable to recover it. 00:24:20.229 [2024-07-24 19:06:57.622930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.229 [2024-07-24 19:06:57.622957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.229 qpair failed and we were unable to recover it. 00:24:20.229 [2024-07-24 19:06:57.623122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.229 [2024-07-24 19:06:57.623148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.229 qpair failed and we were unable to recover it. 
00:24:20.229 [2024-07-24 19:06:57.623270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.229 [2024-07-24 19:06:57.623294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.229 qpair failed and we were unable to recover it. 00:24:20.229 [2024-07-24 19:06:57.623455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.229 [2024-07-24 19:06:57.623481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.229 qpair failed and we were unable to recover it. 00:24:20.229 [2024-07-24 19:06:57.623644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.229 [2024-07-24 19:06:57.623693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.229 qpair failed and we were unable to recover it. 00:24:20.229 [2024-07-24 19:06:57.623845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.229 [2024-07-24 19:06:57.623869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.229 qpair failed and we were unable to recover it. 00:24:20.229 [2024-07-24 19:06:57.624018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.229 [2024-07-24 19:06:57.624059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.229 qpair failed and we were unable to recover it. 00:24:20.229 [2024-07-24 19:06:57.624229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.229 [2024-07-24 19:06:57.624257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.229 qpair failed and we were unable to recover it. 00:24:20.229 [2024-07-24 19:06:57.624390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.229 [2024-07-24 19:06:57.624418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.229 qpair failed and we were unable to recover it. 00:24:20.230 [2024-07-24 19:06:57.624591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.624620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 00:24:20.230 [2024-07-24 19:06:57.624792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.624819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 00:24:20.230 [2024-07-24 19:06:57.624968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.624992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 
00:24:20.230 [2024-07-24 19:06:57.625120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.625145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 00:24:20.230 [2024-07-24 19:06:57.625301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.625325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 00:24:20.230 [2024-07-24 19:06:57.625472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.625497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 00:24:20.230 [2024-07-24 19:06:57.625645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.625670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 00:24:20.230 [2024-07-24 19:06:57.625845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.625870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 00:24:20.230 [2024-07-24 19:06:57.626021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.626046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 00:24:20.230 [2024-07-24 19:06:57.626207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.626236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 00:24:20.230 [2024-07-24 19:06:57.626418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.626442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 00:24:20.230 [2024-07-24 19:06:57.626589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.626613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 00:24:20.230 [2024-07-24 19:06:57.626738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.626762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 
00:24:20.230 [2024-07-24 19:06:57.626887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.626911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 00:24:20.230 [2024-07-24 19:06:57.627087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.627123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 00:24:20.230 [2024-07-24 19:06:57.627292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.627319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 00:24:20.230 [2024-07-24 19:06:57.627520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.627544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 00:24:20.230 [2024-07-24 19:06:57.627731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.627777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 00:24:20.230 [2024-07-24 19:06:57.627917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.627946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 00:24:20.230 [2024-07-24 19:06:57.628084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.628117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 00:24:20.230 [2024-07-24 19:06:57.628281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.628306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 00:24:20.230 [2024-07-24 19:06:57.628434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.628475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 00:24:20.230 [2024-07-24 19:06:57.628628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.628653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 
00:24:20.230 [2024-07-24 19:06:57.628776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.628802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 00:24:20.230 [2024-07-24 19:06:57.628933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.628959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 00:24:20.230 [2024-07-24 19:06:57.629119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.629160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 00:24:20.230 [2024-07-24 19:06:57.629332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.629360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 00:24:20.230 [2024-07-24 19:06:57.629551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.629602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 00:24:20.230 [2024-07-24 19:06:57.629781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.629806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 00:24:20.230 [2024-07-24 19:06:57.629957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.629981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 00:24:20.230 [2024-07-24 19:06:57.630126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.630154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 00:24:20.230 [2024-07-24 19:06:57.630326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.630351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 00:24:20.230 [2024-07-24 19:06:57.630500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.630525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 
00:24:20.230 [2024-07-24 19:06:57.630670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.630702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 00:24:20.230 [2024-07-24 19:06:57.630895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.630923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 00:24:20.230 [2024-07-24 19:06:57.631093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.230 [2024-07-24 19:06:57.631133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.230 qpair failed and we were unable to recover it. 00:24:20.231 [2024-07-24 19:06:57.631288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.631313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 00:24:20.231 [2024-07-24 19:06:57.631445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.631469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 00:24:20.231 [2024-07-24 19:06:57.631600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.631626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 00:24:20.231 [2024-07-24 19:06:57.631779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.631820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 00:24:20.231 [2024-07-24 19:06:57.631993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.632017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 00:24:20.231 [2024-07-24 19:06:57.632194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.632222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 00:24:20.231 [2024-07-24 19:06:57.632372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.632400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 
00:24:20.231 [2024-07-24 19:06:57.632569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.632596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 00:24:20.231 [2024-07-24 19:06:57.632770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.632794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 00:24:20.231 [2024-07-24 19:06:57.632921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.632962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 00:24:20.231 [2024-07-24 19:06:57.633116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.633144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 00:24:20.231 [2024-07-24 19:06:57.633308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.633335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 00:24:20.231 [2024-07-24 19:06:57.633487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.633512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 00:24:20.231 [2024-07-24 19:06:57.633669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.633694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 00:24:20.231 [2024-07-24 19:06:57.633844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.633868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 00:24:20.231 [2024-07-24 19:06:57.634020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.634047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 00:24:20.231 [2024-07-24 19:06:57.634201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.634227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 
00:24:20.231 [2024-07-24 19:06:57.634377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.634401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 00:24:20.231 [2024-07-24 19:06:57.634527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.634551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 00:24:20.231 [2024-07-24 19:06:57.634693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.634721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 00:24:20.231 [2024-07-24 19:06:57.634888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.634914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 00:24:20.231 [2024-07-24 19:06:57.635063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.635112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 00:24:20.231 [2024-07-24 19:06:57.635258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.635285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 00:24:20.231 [2024-07-24 19:06:57.635466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.635491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 00:24:20.231 [2024-07-24 19:06:57.635617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.635641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 00:24:20.231 [2024-07-24 19:06:57.635769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.635808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 00:24:20.231 [2024-07-24 19:06:57.635972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.635999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 
00:24:20.231 [2024-07-24 19:06:57.636170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.636195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 00:24:20.231 [2024-07-24 19:06:57.636317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.636341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 00:24:20.231 [2024-07-24 19:06:57.636484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.636508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 00:24:20.231 [2024-07-24 19:06:57.636654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.636694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 00:24:20.231 [2024-07-24 19:06:57.636887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.636914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 00:24:20.231 [2024-07-24 19:06:57.637069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.637094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 00:24:20.231 [2024-07-24 19:06:57.637247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.637272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 00:24:20.231 [2024-07-24 19:06:57.637426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.637468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 00:24:20.231 [2024-07-24 19:06:57.637627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.637654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 00:24:20.231 [2024-07-24 19:06:57.637827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.637852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 
00:24:20.231 [2024-07-24 19:06:57.638017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.231 [2024-07-24 19:06:57.638044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.231 qpair failed and we were unable to recover it. 00:24:20.232 [2024-07-24 19:06:57.638198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.232 [2024-07-24 19:06:57.638223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.232 qpair failed and we were unable to recover it. 00:24:20.232 [2024-07-24 19:06:57.638427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.232 [2024-07-24 19:06:57.638454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.232 qpair failed and we were unable to recover it. 00:24:20.232 [2024-07-24 19:06:57.638624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.232 [2024-07-24 19:06:57.638649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.232 qpair failed and we were unable to recover it. 00:24:20.232 [2024-07-24 19:06:57.638775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.232 [2024-07-24 19:06:57.638800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.232 qpair failed and we were unable to recover it. 00:24:20.232 [2024-07-24 19:06:57.638949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.232 [2024-07-24 19:06:57.638973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.232 qpair failed and we were unable to recover it. 00:24:20.232 [2024-07-24 19:06:57.639138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.232 [2024-07-24 19:06:57.639180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.232 qpair failed and we were unable to recover it. 00:24:20.232 [2024-07-24 19:06:57.639306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.232 [2024-07-24 19:06:57.639330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.232 qpair failed and we were unable to recover it. 00:24:20.232 [2024-07-24 19:06:57.639482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.232 [2024-07-24 19:06:57.639506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.232 qpair failed and we were unable to recover it. 00:24:20.232 [2024-07-24 19:06:57.639670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.232 [2024-07-24 19:06:57.639695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.232 qpair failed and we were unable to recover it. 
00:24:20.232 [2024-07-24 19:06:57.639869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.232 [2024-07-24 19:06:57.639896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.232 qpair failed and we were unable to recover it. 00:24:20.232 [2024-07-24 19:06:57.640091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.232 [2024-07-24 19:06:57.640135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.232 qpair failed and we were unable to recover it. 00:24:20.232 [2024-07-24 19:06:57.640267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.232 [2024-07-24 19:06:57.640291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.232 qpair failed and we were unable to recover it. 00:24:20.232 [2024-07-24 19:06:57.640443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.232 [2024-07-24 19:06:57.640484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.232 qpair failed and we were unable to recover it. 00:24:20.232 [2024-07-24 19:06:57.640651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.232 [2024-07-24 19:06:57.640678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.232 qpair failed and we were unable to recover it. 00:24:20.232 [2024-07-24 19:06:57.640823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.232 [2024-07-24 19:06:57.640847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.232 qpair failed and we were unable to recover it. 00:24:20.232 [2024-07-24 19:06:57.640999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.232 [2024-07-24 19:06:57.641024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.232 qpair failed and we were unable to recover it. 00:24:20.232 [2024-07-24 19:06:57.641170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.232 [2024-07-24 19:06:57.641196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.232 qpair failed and we were unable to recover it. 00:24:20.232 [2024-07-24 19:06:57.641339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.232 [2024-07-24 19:06:57.641363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.232 qpair failed and we were unable to recover it. 00:24:20.232 [2024-07-24 19:06:57.641530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.232 [2024-07-24 19:06:57.641556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.232 qpair failed and we were unable to recover it. 
00:24:20.232 [2024-07-24 19:06:57.641708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.232 [2024-07-24 19:06:57.641734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.232 qpair failed and we were unable to recover it. 00:24:20.232 [2024-07-24 19:06:57.641935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.232 [2024-07-24 19:06:57.641962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.232 qpair failed and we were unable to recover it. 00:24:20.232 [2024-07-24 19:06:57.642143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.232 [2024-07-24 19:06:57.642172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.232 qpair failed and we were unable to recover it. 00:24:20.232 [2024-07-24 19:06:57.642328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.232 [2024-07-24 19:06:57.642352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.232 qpair failed and we were unable to recover it. 00:24:20.232 [2024-07-24 19:06:57.642502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.232 [2024-07-24 19:06:57.642527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.232 qpair failed and we were unable to recover it. 00:24:20.232 [2024-07-24 19:06:57.642677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.232 [2024-07-24 19:06:57.642701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.232 qpair failed and we were unable to recover it. 00:24:20.232 [2024-07-24 19:06:57.642851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.232 [2024-07-24 19:06:57.642893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.232 qpair failed and we were unable to recover it. 00:24:20.232 [2024-07-24 19:06:57.643078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.232 [2024-07-24 19:06:57.643109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.232 qpair failed and we were unable to recover it. 00:24:20.232 [2024-07-24 19:06:57.643287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.232 [2024-07-24 19:06:57.643315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.232 qpair failed and we were unable to recover it. 00:24:20.232 [2024-07-24 19:06:57.643474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.233 [2024-07-24 19:06:57.643501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.233 qpair failed and we were unable to recover it. 
00:24:20.233 [2024-07-24 19:06:57.643665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.233 [2024-07-24 19:06:57.643693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.233 qpair failed and we were unable to recover it.
00:24:20.238 [... the same three-line failure (connect() errno = 111, then the nvme_tcp_qpair_connect_sock error for tqpair=0x17c6250, addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats roughly 200 more times with timestamps running from 19:06:57.643 through 19:06:57.683; duplicate records elided ...]
00:24:20.238 [2024-07-24 19:06:57.683896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.238 [2024-07-24 19:06:57.683924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.238 qpair failed and we were unable to recover it. 00:24:20.238 [2024-07-24 19:06:57.684129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.238 [2024-07-24 19:06:57.684157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.238 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-24 19:06:57.684298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.684326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-24 19:06:57.684500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.684525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-24 19:06:57.684673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.684698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-24 19:06:57.684873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.684898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-24 19:06:57.685055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.685083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-24 19:06:57.685268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.685293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-24 19:06:57.685443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.685469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-24 19:06:57.685654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.685681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 
00:24:20.239 [2024-07-24 19:06:57.685850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.685877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-24 19:06:57.686053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.686078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-24 19:06:57.686237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.686279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-24 19:06:57.686453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.686481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-24 19:06:57.686611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.686638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-24 19:06:57.686802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.686826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-24 19:06:57.686992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.687020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-24 19:06:57.687198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.687227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-24 19:06:57.687360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.687387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-24 19:06:57.687533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.687558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 
00:24:20.239 [2024-07-24 19:06:57.687690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.687715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-24 19:06:57.687863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.687904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-24 19:06:57.688098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.688128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-24 19:06:57.688279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.688303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-24 19:06:57.688467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.688494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-24 19:06:57.688636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.688662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-24 19:06:57.688804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.688835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-24 19:06:57.689006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.689031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-24 19:06:57.689159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.689201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-24 19:06:57.689327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.689354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 
00:24:20.239 [2024-07-24 19:06:57.689492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.689520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-24 19:06:57.689695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.689720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-24 19:06:57.689853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.689876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-24 19:06:57.690082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.690116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-24 19:06:57.690257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.690285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-24 19:06:57.690481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.690506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-24 19:06:57.690675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.690703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-24 19:06:57.690900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.239 [2024-07-24 19:06:57.690925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.240 [2024-07-24 19:06:57.691056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.691081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 00:24:20.240 [2024-07-24 19:06:57.691213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.691238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 
00:24:20.240 [2024-07-24 19:06:57.691397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.691438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 00:24:20.240 [2024-07-24 19:06:57.691604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.691632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 00:24:20.240 [2024-07-24 19:06:57.691777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.691805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 00:24:20.240 [2024-07-24 19:06:57.691999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.692023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 00:24:20.240 [2024-07-24 19:06:57.692197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.692225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 00:24:20.240 [2024-07-24 19:06:57.692399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.692424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 00:24:20.240 [2024-07-24 19:06:57.692555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.692581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 00:24:20.240 [2024-07-24 19:06:57.692766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.692791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 00:24:20.240 [2024-07-24 19:06:57.692955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.692981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 00:24:20.240 [2024-07-24 19:06:57.693143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.693171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 
00:24:20.240 [2024-07-24 19:06:57.693336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.693363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 00:24:20.240 [2024-07-24 19:06:57.693539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.693563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 00:24:20.240 [2024-07-24 19:06:57.693787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.693842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 00:24:20.240 [2024-07-24 19:06:57.694005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.694031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 00:24:20.240 [2024-07-24 19:06:57.694185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.694212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 00:24:20.240 [2024-07-24 19:06:57.694382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.694407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 00:24:20.240 [2024-07-24 19:06:57.694573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.694599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 00:24:20.240 [2024-07-24 19:06:57.694742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.694769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 00:24:20.240 [2024-07-24 19:06:57.694911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.694938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 00:24:20.240 [2024-07-24 19:06:57.695088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.695121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 
00:24:20.240 [2024-07-24 19:06:57.695273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.695298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 00:24:20.240 [2024-07-24 19:06:57.695493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.695518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 00:24:20.240 [2024-07-24 19:06:57.695634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.695659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 00:24:20.240 [2024-07-24 19:06:57.695810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.695834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 00:24:20.240 [2024-07-24 19:06:57.696002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.696027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 00:24:20.240 [2024-07-24 19:06:57.696180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.696205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 00:24:20.240 [2024-07-24 19:06:57.696331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.696355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 00:24:20.240 [2024-07-24 19:06:57.696496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.696537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 00:24:20.240 [2024-07-24 19:06:57.696720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.696780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 00:24:20.240 [2024-07-24 19:06:57.696970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.697014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 
00:24:20.240 [2024-07-24 19:06:57.697143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.697170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 00:24:20.240 [2024-07-24 19:06:57.697323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.697348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 00:24:20.240 [2024-07-24 19:06:57.697512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.697556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 00:24:20.240 [2024-07-24 19:06:57.697704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.697748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:20.240 qpair failed and we were unable to recover it. 00:24:20.240 [2024-07-24 19:06:57.697921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.240 [2024-07-24 19:06:57.697964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:20.241 qpair failed and we were unable to recover it. 00:24:20.241 [2024-07-24 19:06:57.698123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.241 [2024-07-24 19:06:57.698149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.241 qpair failed and we were unable to recover it. 00:24:20.241 [2024-07-24 19:06:57.698273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.241 [2024-07-24 19:06:57.698297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.241 qpair failed and we were unable to recover it. 00:24:20.241 [2024-07-24 19:06:57.698475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.241 [2024-07-24 19:06:57.698503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.241 qpair failed and we were unable to recover it. 00:24:20.241 [2024-07-24 19:06:57.698671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.241 [2024-07-24 19:06:57.698699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.241 qpair failed and we were unable to recover it. 00:24:20.241 [2024-07-24 19:06:57.698870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.241 [2024-07-24 19:06:57.698897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.241 qpair failed and we were unable to recover it. 
00:24:20.241 [2024-07-24 19:06:57.699089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.241 [2024-07-24 19:06:57.699123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.241 qpair failed and we were unable to recover it. 00:24:20.241 [2024-07-24 19:06:57.699272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.241 [2024-07-24 19:06:57.699296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.241 qpair failed and we were unable to recover it. 00:24:20.241 [2024-07-24 19:06:57.699479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.241 [2024-07-24 19:06:57.699520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.241 qpair failed and we were unable to recover it. 00:24:20.241 [2024-07-24 19:06:57.699753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.241 [2024-07-24 19:06:57.699804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.241 qpair failed and we were unable to recover it. 00:24:20.241 [2024-07-24 19:06:57.699938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.241 [2024-07-24 19:06:57.699966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.241 qpair failed and we were unable to recover it. 00:24:20.241 [2024-07-24 19:06:57.700152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.241 [2024-07-24 19:06:57.700178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.241 qpair failed and we were unable to recover it. 00:24:20.241 [2024-07-24 19:06:57.700339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.241 [2024-07-24 19:06:57.700363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.241 qpair failed and we were unable to recover it. 00:24:20.241 [2024-07-24 19:06:57.700512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.241 [2024-07-24 19:06:57.700541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.241 qpair failed and we were unable to recover it. 00:24:20.241 [2024-07-24 19:06:57.700729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.241 [2024-07-24 19:06:57.700754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.241 qpair failed and we were unable to recover it. 00:24:20.241 [2024-07-24 19:06:57.700901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.241 [2024-07-24 19:06:57.700929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.241 qpair failed and we were unable to recover it. 
00:24:20.241 [2024-07-24 19:06:57.701097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.241 [2024-07-24 19:06:57.701146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.241 qpair failed and we were unable to recover it. 00:24:20.241 [2024-07-24 19:06:57.701299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.241 [2024-07-24 19:06:57.701324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.241 qpair failed and we were unable to recover it. 00:24:20.241 [2024-07-24 19:06:57.701493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.241 [2024-07-24 19:06:57.701521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.241 qpair failed and we were unable to recover it. 00:24:20.241 [2024-07-24 19:06:57.701684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.241 [2024-07-24 19:06:57.701711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.241 qpair failed and we were unable to recover it. 00:24:20.241 [2024-07-24 19:06:57.701880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.241 [2024-07-24 19:06:57.701907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.241 qpair failed and we were unable to recover it. 00:24:20.241 [2024-07-24 19:06:57.702075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.241 [2024-07-24 19:06:57.702099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.241 qpair failed and we were unable to recover it. 00:24:20.241 [2024-07-24 19:06:57.702256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.241 [2024-07-24 19:06:57.702281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.241 qpair failed and we were unable to recover it. 00:24:20.241 [2024-07-24 19:06:57.702434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.241 [2024-07-24 19:06:57.702459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.241 qpair failed and we were unable to recover it. 00:24:20.241 [2024-07-24 19:06:57.702634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.241 [2024-07-24 19:06:57.702661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.241 qpair failed and we were unable to recover it. 00:24:20.241 [2024-07-24 19:06:57.702984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.241 [2024-07-24 19:06:57.703033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.241 qpair failed and we were unable to recover it. 
00:24:20.241 [2024-07-24 19:06:57.703182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.241 [2024-07-24 19:06:57.703208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.241 qpair failed and we were unable to recover it. 00:24:20.241 [2024-07-24 19:06:57.703355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.241 [2024-07-24 19:06:57.703396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.241 qpair failed and we were unable to recover it. 00:24:20.241 [2024-07-24 19:06:57.703531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.241 [2024-07-24 19:06:57.703558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.241 qpair failed and we were unable to recover it. 00:24:20.241 [2024-07-24 19:06:57.703730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.241 [2024-07-24 19:06:57.703757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.241 qpair failed and we were unable to recover it. 00:24:20.241 [2024-07-24 19:06:57.703931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.241 [2024-07-24 19:06:57.703958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.241 qpair failed and we were unable to recover it. 00:24:20.241 [2024-07-24 19:06:57.704100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.241 [2024-07-24 19:06:57.704132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.242 qpair failed and we were unable to recover it. 00:24:20.242 [2024-07-24 19:06:57.704311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.242 [2024-07-24 19:06:57.704336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.242 qpair failed and we were unable to recover it. 00:24:20.242 [2024-07-24 19:06:57.704482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.242 [2024-07-24 19:06:57.704507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.242 qpair failed and we were unable to recover it. 00:24:20.242 [2024-07-24 19:06:57.704760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.242 [2024-07-24 19:06:57.704811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.242 qpair failed and we were unable to recover it. 00:24:20.242 [2024-07-24 19:06:57.705006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.242 [2024-07-24 19:06:57.705033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.242 qpair failed and we were unable to recover it. 
00:24:20.242 [2024-07-24 19:06:57.705208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.242 [2024-07-24 19:06:57.705234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.242 qpair failed and we were unable to recover it. 00:24:20.242 [2024-07-24 19:06:57.705367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.242 [2024-07-24 19:06:57.705392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.242 qpair failed and we were unable to recover it. 00:24:20.242 [2024-07-24 19:06:57.705587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.242 [2024-07-24 19:06:57.705615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.242 qpair failed and we were unable to recover it. 00:24:20.242 [2024-07-24 19:06:57.705784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.242 [2024-07-24 19:06:57.705812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.242 qpair failed and we were unable to recover it. 00:24:20.242 [2024-07-24 19:06:57.705985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.242 [2024-07-24 19:06:57.706012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.242 qpair failed and we were unable to recover it. 00:24:20.242 [2024-07-24 19:06:57.706183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.242 [2024-07-24 19:06:57.706208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.242 qpair failed and we were unable to recover it. 00:24:20.242 [2024-07-24 19:06:57.706341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.242 [2024-07-24 19:06:57.706365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.242 qpair failed and we were unable to recover it. 00:24:20.242 [2024-07-24 19:06:57.706567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.242 [2024-07-24 19:06:57.706595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.242 qpair failed and we were unable to recover it. 00:24:20.242 [2024-07-24 19:06:57.706763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.242 [2024-07-24 19:06:57.706791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.242 qpair failed and we were unable to recover it. 00:24:20.242 [2024-07-24 19:06:57.707043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.242 [2024-07-24 19:06:57.707071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.242 qpair failed and we were unable to recover it. 
00:24:20.242 [2024-07-24 19:06:57.707223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.242 [2024-07-24 19:06:57.707249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.242 qpair failed and we were unable to recover it. 00:24:20.242 [2024-07-24 19:06:57.707401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.242 [2024-07-24 19:06:57.707430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.242 qpair failed and we were unable to recover it. 00:24:20.242 [2024-07-24 19:06:57.707637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.242 [2024-07-24 19:06:57.707665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.242 qpair failed and we were unable to recover it. 00:24:20.242 [2024-07-24 19:06:57.707887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.242 [2024-07-24 19:06:57.707942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.242 qpair failed and we were unable to recover it. 00:24:20.242 [2024-07-24 19:06:57.708109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.242 [2024-07-24 19:06:57.708134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.242 qpair failed and we were unable to recover it. 00:24:20.242 [2024-07-24 19:06:57.708289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.242 [2024-07-24 19:06:57.708316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.242 qpair failed and we were unable to recover it. 00:24:20.242 [2024-07-24 19:06:57.708512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.242 [2024-07-24 19:06:57.708540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.242 qpair failed and we were unable to recover it. 00:24:20.242 [2024-07-24 19:06:57.708705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.242 [2024-07-24 19:06:57.708730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.242 qpair failed and we were unable to recover it. 00:24:20.242 [2024-07-24 19:06:57.708860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.242 [2024-07-24 19:06:57.708884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.242 qpair failed and we were unable to recover it. 00:24:20.242 [2024-07-24 19:06:57.709010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.242 [2024-07-24 19:06:57.709035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.242 qpair failed and we were unable to recover it. 
00:24:20.242 [2024-07-24 19:06:57.709232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.242 [2024-07-24 19:06:57.709258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.242 qpair failed and we were unable to recover it.
[... the same three-line error sequence (connect() failed, errno = 111; sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 19:06:57.709 through 19:06:57.749; only the microsecond timestamps differ ...]
00:24:20.248 [2024-07-24 19:06:57.749316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.248 [2024-07-24 19:06:57.749342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.248 qpair failed and we were unable to recover it.
00:24:20.248 [2024-07-24 19:06:57.749570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.248 [2024-07-24 19:06:57.749626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.248 qpair failed and we were unable to recover it. 00:24:20.248 [2024-07-24 19:06:57.749783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.248 [2024-07-24 19:06:57.749811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.248 qpair failed and we were unable to recover it. 00:24:20.248 [2024-07-24 19:06:57.749938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.248 [2024-07-24 19:06:57.749966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.248 qpair failed and we were unable to recover it. 00:24:20.248 [2024-07-24 19:06:57.750138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.248 [2024-07-24 19:06:57.750163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.248 qpair failed and we were unable to recover it. 00:24:20.248 [2024-07-24 19:06:57.750314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.248 [2024-07-24 19:06:57.750356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.248 qpair failed and we were unable to recover it. 00:24:20.248 [2024-07-24 19:06:57.750523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.248 [2024-07-24 19:06:57.750551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.248 qpair failed and we were unable to recover it. 00:24:20.248 [2024-07-24 19:06:57.750714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.248 [2024-07-24 19:06:57.750742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.248 qpair failed and we were unable to recover it. 00:24:20.248 [2024-07-24 19:06:57.750894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.248 [2024-07-24 19:06:57.750920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.248 qpair failed and we were unable to recover it. 00:24:20.248 [2024-07-24 19:06:57.751054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.248 [2024-07-24 19:06:57.751079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.248 qpair failed and we were unable to recover it. 00:24:20.248 [2024-07-24 19:06:57.751303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.248 [2024-07-24 19:06:57.751329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.248 qpair failed and we were unable to recover it. 
00:24:20.248 [2024-07-24 19:06:57.751544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.248 [2024-07-24 19:06:57.751569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.248 qpair failed and we were unable to recover it. 00:24:20.248 [2024-07-24 19:06:57.751697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.248 [2024-07-24 19:06:57.751722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.248 qpair failed and we were unable to recover it. 00:24:20.248 [2024-07-24 19:06:57.751877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.248 [2024-07-24 19:06:57.751901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.248 qpair failed and we were unable to recover it. 00:24:20.248 [2024-07-24 19:06:57.752075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.248 [2024-07-24 19:06:57.752110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.248 qpair failed and we were unable to recover it. 00:24:20.248 [2024-07-24 19:06:57.752311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.248 [2024-07-24 19:06:57.752336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.248 qpair failed and we were unable to recover it. 00:24:20.248 [2024-07-24 19:06:57.752466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.248 [2024-07-24 19:06:57.752492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.248 qpair failed and we were unable to recover it. 00:24:20.248 [2024-07-24 19:06:57.752667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.248 [2024-07-24 19:06:57.752692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.248 qpair failed and we were unable to recover it. 00:24:20.248 [2024-07-24 19:06:57.752820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.248 [2024-07-24 19:06:57.752845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.248 qpair failed and we were unable to recover it. 00:24:20.248 [2024-07-24 19:06:57.752993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.248 [2024-07-24 19:06:57.753018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.248 qpair failed and we were unable to recover it. 00:24:20.248 [2024-07-24 19:06:57.753178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.248 [2024-07-24 19:06:57.753203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.248 qpair failed and we were unable to recover it. 
00:24:20.248 [2024-07-24 19:06:57.753384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.248 [2024-07-24 19:06:57.753437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.248 qpair failed and we were unable to recover it. 00:24:20.248 [2024-07-24 19:06:57.753628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.248 [2024-07-24 19:06:57.753653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.248 qpair failed and we were unable to recover it. 00:24:20.248 [2024-07-24 19:06:57.753775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.753800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.753953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.753979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.754147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.754175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.754359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.754391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.754545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.754570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.754722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.754747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.754903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.754946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.755124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.755152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 
00:24:20.249 [2024-07-24 19:06:57.755321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.755349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.755521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.755546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.755666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.755706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.755865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.755892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.756090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.756121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.756265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.756290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.756411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.756453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.756618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.756645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.756833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.756861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.757014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.757038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 
00:24:20.249 [2024-07-24 19:06:57.757195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.757221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.757386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.757411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.757561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.757587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.757743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.757768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.757920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.757962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.758089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.758123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.758281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.758309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.758475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.758499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.758652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.758677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.758830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.758855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 
00:24:20.249 [2024-07-24 19:06:57.759055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.759079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.759236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.759262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.759390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.759436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.759569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.759597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.759756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.759783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.759960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.759986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.760156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.760185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.760354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.760381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.760528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.760552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.760701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.760726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 
00:24:20.249 [2024-07-24 19:06:57.760852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.760895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.761086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.761121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.761282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.761310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.761483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.761507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.761694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.761746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.761917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.249 [2024-07-24 19:06:57.761944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.249 qpair failed and we were unable to recover it. 00:24:20.249 [2024-07-24 19:06:57.762143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.762169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.762295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.762320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.762469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.762493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.762666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.762691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 
00:24:20.250 [2024-07-24 19:06:57.762843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.762886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.763029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.763054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.763172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.763198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.763328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.763352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.763553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.763581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.763753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.763778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.763902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.763944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.764109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.764137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.764307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.764334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.764506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.764531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 
00:24:20.250 [2024-07-24 19:06:57.764719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.764768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.764904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.764931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.765118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.765146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.765340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.765365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.765542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.765569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.765734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.765763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.765902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.765930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.766131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.766157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.766302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.766330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.766472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.766500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 
00:24:20.250 [2024-07-24 19:06:57.766674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.766701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.766899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.766923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.767075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.767117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.767263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.767291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.767428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.767456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.767631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.767656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.767813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.767838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.767958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.767983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.768117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.768143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.768293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.768318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 
00:24:20.250 [2024-07-24 19:06:57.768440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.768465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.768611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.768638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.768829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.768856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.769022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.769047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.769214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.769242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.769407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.769435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.769601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.769629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.250 qpair failed and we were unable to recover it. 00:24:20.250 [2024-07-24 19:06:57.769805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.250 [2024-07-24 19:06:57.769830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.251 qpair failed and we were unable to recover it. 00:24:20.251 [2024-07-24 19:06:57.770021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.251 [2024-07-24 19:06:57.770048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.251 qpair failed and we were unable to recover it. 00:24:20.251 [2024-07-24 19:06:57.770185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.251 [2024-07-24 19:06:57.770211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.251 qpair failed and we were unable to recover it. 
00:24:20.251 [2024-07-24 19:06:57.770366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.251 [2024-07-24 19:06:57.770391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.251 qpair failed and we were unable to recover it. 00:24:20.251 [2024-07-24 19:06:57.770575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.251 [2024-07-24 19:06:57.770600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.251 qpair failed and we were unable to recover it. 00:24:20.251 [2024-07-24 19:06:57.770837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.251 [2024-07-24 19:06:57.770864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.251 qpair failed and we were unable to recover it. 00:24:20.251 [2024-07-24 19:06:57.771038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.251 [2024-07-24 19:06:57.771063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.251 qpair failed and we were unable to recover it. 00:24:20.251 [2024-07-24 19:06:57.771216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.251 [2024-07-24 19:06:57.771241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.251 qpair failed and we were unable to recover it. 00:24:20.251 [2024-07-24 19:06:57.771394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.251 [2024-07-24 19:06:57.771419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.251 qpair failed and we were unable to recover it. 00:24:20.251 [2024-07-24 19:06:57.771591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.251 [2024-07-24 19:06:57.771619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.251 qpair failed and we were unable to recover it. 00:24:20.251 [2024-07-24 19:06:57.771800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.251 [2024-07-24 19:06:57.771824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.251 qpair failed and we were unable to recover it. 00:24:20.251 [2024-07-24 19:06:57.771977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.251 [2024-07-24 19:06:57.772002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.251 qpair failed and we were unable to recover it. 00:24:20.251 [2024-07-24 19:06:57.772173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.251 [2024-07-24 19:06:57.772199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.251 qpair failed and we were unable to recover it. 
00:24:20.251 [2024-07-24 19:06:57.772415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.251 [2024-07-24 19:06:57.772447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.251 qpair failed and we were unable to recover it. 00:24:20.251 [2024-07-24 19:06:57.772616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.251 [2024-07-24 19:06:57.772646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.251 qpair failed and we were unable to recover it. 00:24:20.251 [2024-07-24 19:06:57.772807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.251 [2024-07-24 19:06:57.772834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.251 qpair failed and we were unable to recover it. 00:24:20.251 [2024-07-24 19:06:57.773010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.251 [2024-07-24 19:06:57.773034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.251 qpair failed and we were unable to recover it. 00:24:20.251 [2024-07-24 19:06:57.773208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.251 [2024-07-24 19:06:57.773235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.251 qpair failed and we were unable to recover it. 00:24:20.251 [2024-07-24 19:06:57.773377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.251 [2024-07-24 19:06:57.773405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.251 qpair failed and we were unable to recover it. 00:24:20.251 [2024-07-24 19:06:57.773545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.251 [2024-07-24 19:06:57.773573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.251 qpair failed and we were unable to recover it. 00:24:20.251 [2024-07-24 19:06:57.773749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.251 [2024-07-24 19:06:57.773773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.251 qpair failed and we were unable to recover it. 00:24:20.251 [2024-07-24 19:06:57.773921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.251 [2024-07-24 19:06:57.773962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.251 qpair failed and we were unable to recover it. 00:24:20.251 [2024-07-24 19:06:57.774143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.251 [2024-07-24 19:06:57.774169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.251 qpair failed and we were unable to recover it. 
00:24:20.251 [2024-07-24 19:06:57.774295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.251 [2024-07-24 19:06:57.774321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.251 qpair failed and we were unable to recover it. 00:24:20.251 [2024-07-24 19:06:57.774510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.251 [2024-07-24 19:06:57.774535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.251 qpair failed and we were unable to recover it. 00:24:20.251 [2024-07-24 19:06:57.774667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.251 [2024-07-24 19:06:57.774691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.251 qpair failed and we were unable to recover it. 00:24:20.251 [2024-07-24 19:06:57.774815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.252 [2024-07-24 19:06:57.774840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.252 qpair failed and we were unable to recover it. 00:24:20.252 [2024-07-24 19:06:57.775014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.252 [2024-07-24 19:06:57.775039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.252 qpair failed and we were unable to recover it. 00:24:20.252 [2024-07-24 19:06:57.775169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.252 [2024-07-24 19:06:57.775195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.252 qpair failed and we were unable to recover it. 00:24:20.252 [2024-07-24 19:06:57.775347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.252 [2024-07-24 19:06:57.775388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.252 qpair failed and we were unable to recover it. 00:24:20.252 [2024-07-24 19:06:57.775557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.252 [2024-07-24 19:06:57.775584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.252 qpair failed and we were unable to recover it. 00:24:20.252 [2024-07-24 19:06:57.775752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.252 [2024-07-24 19:06:57.775779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.252 qpair failed and we were unable to recover it. 00:24:20.252 [2024-07-24 19:06:57.775923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.252 [2024-07-24 19:06:57.775948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.252 qpair failed and we were unable to recover it. 
00:24:20.539 [2024-07-24 19:06:57.784943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4230 is same with the state(5) to be set
00:24:20.539 Read completed with error (sct=0, sc=8)
00:24:20.539 starting I/O failed
00:24:20.539 Read completed with error (sct=0, sc=8)
00:24:20.539 starting I/O failed
00:24:20.539 Read completed with error (sct=0, sc=8)
00:24:20.539 starting I/O failed
00:24:20.539 Read completed with error (sct=0, sc=8)
00:24:20.539 starting I/O failed
00:24:20.539 Read completed with error (sct=0, sc=8)
00:24:20.539 starting I/O failed
00:24:20.539 Read completed with error (sct=0, sc=8)
00:24:20.539 starting I/O failed
00:24:20.539 Read completed with error (sct=0, sc=8)
00:24:20.539 starting I/O failed
00:24:20.539 Read completed with error (sct=0, sc=8)
00:24:20.539 starting I/O failed
00:24:20.539 Read completed with error (sct=0, sc=8)
00:24:20.539 starting I/O failed
00:24:20.539 Read completed with error (sct=0, sc=8)
00:24:20.539 starting I/O failed
00:24:20.539 Read completed with error (sct=0, sc=8)
00:24:20.539 starting I/O failed
00:24:20.539 Read completed with error (sct=0, sc=8)
00:24:20.540 starting I/O failed
00:24:20.540 Read completed with error (sct=0, sc=8)
00:24:20.540 starting I/O failed
00:24:20.540 Read completed with error (sct=0, sc=8)
00:24:20.540 starting I/O failed
00:24:20.540 Write completed with error (sct=0, sc=8)
00:24:20.540 starting I/O failed
00:24:20.540 Read completed with error (sct=0, sc=8)
00:24:20.540 starting I/O failed
00:24:20.540 Write completed with error (sct=0, sc=8)
00:24:20.540 starting I/O failed
00:24:20.540 Write completed with error (sct=0, sc=8)
00:24:20.540 starting I/O failed
00:24:20.540 Read completed with error (sct=0, sc=8)
00:24:20.540 starting I/O failed
00:24:20.540 Read completed with error (sct=0, sc=8)
00:24:20.540 starting I/O failed
00:24:20.540 Read completed with error (sct=0, sc=8)
00:24:20.540 starting I/O failed
00:24:20.540 Read completed with error (sct=0, sc=8)
00:24:20.540 starting I/O failed
00:24:20.540 Write completed with error (sct=0, sc=8)
00:24:20.540 starting I/O failed
00:24:20.540 Write completed with error (sct=0, sc=8)
00:24:20.540 starting I/O failed
00:24:20.540 Read completed with error (sct=0, sc=8)
00:24:20.540 starting I/O failed
00:24:20.540 Write completed with error (sct=0, sc=8)
00:24:20.540 starting I/O failed
00:24:20.540 Write completed with error (sct=0, sc=8)
00:24:20.540 starting I/O failed
00:24:20.540 Read completed with error (sct=0, sc=8)
00:24:20.540 starting I/O failed
00:24:20.540 Read completed with error (sct=0, sc=8)
00:24:20.540 starting I/O failed
00:24:20.540 Read completed with error (sct=0, sc=8)
00:24:20.540 starting I/O failed
00:24:20.540 Read completed with error (sct=0, sc=8)
00:24:20.540 starting I/O failed
00:24:20.540 Write completed with error (sct=0, sc=8)
00:24:20.540 starting I/O failed
00:24:20.540 [2024-07-24 19:06:57.785370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:20.540 [2024-07-24 19:06:57.785571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.540 [2024-07-24 19:06:57.785615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.540 qpair failed and we were unable to recover it.
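This block is the point where a live qpair actually dies rather than merely failing to connect: nvme_tcp flags an unexpected receive-state transition, every outstanding I/O on the qpair (24 reads and 8 writes here) is completed with an error, and spdk_nvme_qpair_process_completions reports a CQ transport error of -6, i.e. -ENXIO (No such device or address), as the log itself spells out. The (sct=0, sc=8) pair on each completion is the NVMe Status Code Type and Status Code. Assuming the standard NVMe completion-entry layout, that decodes to the generic status "Command Aborted due to SQ Deletion", the status outstanding commands receive when their queue pair is torn down. A small decoding sketch, with a fabricated DW3 value purely for illustration:

/* Sketch: decode the Status Field behind the "(sct=0, sc=8)" lines above.
 * In a completion queue entry, DW3 carries the Command Identifier (bits 15:0),
 * the Phase Tag (bit 16) and the Status Field (bits 31:17); within the Status
 * Field, SC is the low 8 bits and SCT the next 3. */
#include <stdint.h>
#include <stdio.h>

static void decode_cqe_status(uint32_t cqe_dw3)
{
    uint32_t sf  = cqe_dw3 >> 17;     /* strip CID (15:0) and phase tag (16) */
    uint32_t sc  = sf & 0xffu;        /* Status Code */
    uint32_t sct = (sf >> 8) & 0x7u;  /* Status Code Type */

    printf("(sct=%u, sc=%u)%s\n", sct, sc,
           (sct == 0 && sc == 0x08) ? " = command aborted due to SQ deletion" : "");
}

int main(void)
{
    /* Fabricated DW3 for illustration: CID=0, phase=1, SCT=0, SC=0x08. */
    decode_cqe_status((0x08u << 17) | (1u << 16));
    return 0;
}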
00:24:20.540 [2024-07-24 19:06:57.785801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.540 [2024-07-24 19:06:57.785830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.540 qpair failed and we were unable to recover it. 00:24:20.540 [2024-07-24 19:06:57.785977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.540 [2024-07-24 19:06:57.786039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.540 qpair failed and we were unable to recover it. 00:24:20.540 [2024-07-24 19:06:57.786223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.540 [2024-07-24 19:06:57.786250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.540 qpair failed and we were unable to recover it. 00:24:20.540 [2024-07-24 19:06:57.786387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.540 [2024-07-24 19:06:57.786413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.540 qpair failed and we were unable to recover it. 00:24:20.540 [2024-07-24 19:06:57.786708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.540 [2024-07-24 19:06:57.786758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.540 qpair failed and we were unable to recover it. 00:24:20.540 [2024-07-24 19:06:57.787034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.540 [2024-07-24 19:06:57.787085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.540 qpair failed and we were unable to recover it. 00:24:20.540 [2024-07-24 19:06:57.787274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.540 [2024-07-24 19:06:57.787300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.540 qpair failed and we were unable to recover it. 00:24:20.540 [2024-07-24 19:06:57.787473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.540 [2024-07-24 19:06:57.787502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.540 qpair failed and we were unable to recover it. 00:24:20.540 [2024-07-24 19:06:57.787648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.540 [2024-07-24 19:06:57.787677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.540 qpair failed and we were unable to recover it. 00:24:20.540 [2024-07-24 19:06:57.787854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.540 [2024-07-24 19:06:57.787880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.540 qpair failed and we were unable to recover it. 
00:24:20.540 [2024-07-24 19:06:57.788075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.540 [2024-07-24 19:06:57.788109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.540 qpair failed and we were unable to recover it. 00:24:20.540 [2024-07-24 19:06:57.788274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.540 [2024-07-24 19:06:57.788300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.540 qpair failed and we were unable to recover it. 00:24:20.540 [2024-07-24 19:06:57.788453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.540 [2024-07-24 19:06:57.788480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.540 qpair failed and we were unable to recover it. 00:24:20.540 [2024-07-24 19:06:57.788676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.540 [2024-07-24 19:06:57.788731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.540 qpair failed and we were unable to recover it. 00:24:20.540 [2024-07-24 19:06:57.788919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.540 [2024-07-24 19:06:57.788970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.540 qpair failed and we were unable to recover it. 00:24:20.540 [2024-07-24 19:06:57.789125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.540 [2024-07-24 19:06:57.789152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.540 qpair failed and we were unable to recover it. 00:24:20.540 [2024-07-24 19:06:57.789287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.540 [2024-07-24 19:06:57.789314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.540 qpair failed and we were unable to recover it. 00:24:20.540 [2024-07-24 19:06:57.789444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.540 [2024-07-24 19:06:57.789469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.540 qpair failed and we were unable to recover it. 00:24:20.540 [2024-07-24 19:06:57.789634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.540 [2024-07-24 19:06:57.789658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.540 qpair failed and we were unable to recover it. 00:24:20.540 [2024-07-24 19:06:57.789815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.540 [2024-07-24 19:06:57.789856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.540 qpair failed and we were unable to recover it. 
00:24:20.540 [2024-07-24 19:06:57.790003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.540 [2024-07-24 19:06:57.790028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.540 qpair failed and we were unable to recover it. 00:24:20.540 [2024-07-24 19:06:57.790176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.540 [2024-07-24 19:06:57.790202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.540 qpair failed and we were unable to recover it. 00:24:20.540 [2024-07-24 19:06:57.790322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.540 [2024-07-24 19:06:57.790365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.540 qpair failed and we were unable to recover it. 00:24:20.540 [2024-07-24 19:06:57.790504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.540 [2024-07-24 19:06:57.790532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.540 qpair failed and we were unable to recover it. 00:24:20.541 [2024-07-24 19:06:57.790696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.790722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 00:24:20.541 [2024-07-24 19:06:57.790845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.790869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 00:24:20.541 [2024-07-24 19:06:57.791021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.791046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 00:24:20.541 [2024-07-24 19:06:57.791172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.791198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 00:24:20.541 [2024-07-24 19:06:57.791315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.791340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 00:24:20.541 [2024-07-24 19:06:57.791516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.791544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 
00:24:20.541 [2024-07-24 19:06:57.791715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.791740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 00:24:20.541 [2024-07-24 19:06:57.791895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.791920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 00:24:20.541 [2024-07-24 19:06:57.792075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.792125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 00:24:20.541 [2024-07-24 19:06:57.792317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.792343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 00:24:20.541 [2024-07-24 19:06:57.792510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.792559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 00:24:20.541 [2024-07-24 19:06:57.792811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.792862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 00:24:20.541 [2024-07-24 19:06:57.793026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.793051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 00:24:20.541 [2024-07-24 19:06:57.793229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.793254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 00:24:20.541 [2024-07-24 19:06:57.793404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.793429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 00:24:20.541 [2024-07-24 19:06:57.793550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.793575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 
00:24:20.541 [2024-07-24 19:06:57.793703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.793729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 00:24:20.541 [2024-07-24 19:06:57.793883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.793923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 00:24:20.541 [2024-07-24 19:06:57.794131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.794157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 00:24:20.541 [2024-07-24 19:06:57.794297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.794322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 00:24:20.541 [2024-07-24 19:06:57.794492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.794517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 00:24:20.541 [2024-07-24 19:06:57.794690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.794715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 00:24:20.541 [2024-07-24 19:06:57.794932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.794957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 00:24:20.541 [2024-07-24 19:06:57.795086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.795122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 00:24:20.541 [2024-07-24 19:06:57.795270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.795295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 00:24:20.541 [2024-07-24 19:06:57.795466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.795494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 
00:24:20.541 [2024-07-24 19:06:57.795655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.795682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 00:24:20.541 [2024-07-24 19:06:57.795827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.795853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 00:24:20.541 [2024-07-24 19:06:57.796003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.796028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 00:24:20.541 [2024-07-24 19:06:57.796234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.796262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 00:24:20.541 [2024-07-24 19:06:57.796441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.796466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 00:24:20.541 [2024-07-24 19:06:57.796611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.796636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 00:24:20.541 [2024-07-24 19:06:57.796838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.796865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 00:24:20.541 [2024-07-24 19:06:57.797062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.797087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 00:24:20.541 [2024-07-24 19:06:57.797307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.541 [2024-07-24 19:06:57.797351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.541 qpair failed and we were unable to recover it. 00:24:20.542 [2024-07-24 19:06:57.797524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.797561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 
00:24:20.542 [2024-07-24 19:06:57.797734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.797760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 00:24:20.542 [2024-07-24 19:06:57.797954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.798043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 00:24:20.542 [2024-07-24 19:06:57.798234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.798262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 00:24:20.542 [2024-07-24 19:06:57.798419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.798445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 00:24:20.542 [2024-07-24 19:06:57.798611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.798662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 00:24:20.542 [2024-07-24 19:06:57.798828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.798858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 00:24:20.542 [2024-07-24 19:06:57.799017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.799042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 00:24:20.542 [2024-07-24 19:06:57.799198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.799225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 00:24:20.542 [2024-07-24 19:06:57.799373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.799401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 00:24:20.542 [2024-07-24 19:06:57.799574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.799600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 
00:24:20.542 [2024-07-24 19:06:57.799856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.799909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 00:24:20.542 [2024-07-24 19:06:57.800098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.800132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 00:24:20.542 [2024-07-24 19:06:57.800277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.800302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 00:24:20.542 [2024-07-24 19:06:57.800481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.800523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 00:24:20.542 [2024-07-24 19:06:57.800663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.800692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 00:24:20.542 [2024-07-24 19:06:57.800889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.800913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 00:24:20.542 [2024-07-24 19:06:57.801110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.801139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 00:24:20.542 [2024-07-24 19:06:57.801282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.801310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 00:24:20.542 [2024-07-24 19:06:57.801455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.801479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 00:24:20.542 [2024-07-24 19:06:57.801633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.801658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 
00:24:20.542 [2024-07-24 19:06:57.801814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.801840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 00:24:20.542 [2024-07-24 19:06:57.801971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.801996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 00:24:20.542 [2024-07-24 19:06:57.802138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.802178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 00:24:20.542 [2024-07-24 19:06:57.802343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.802371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 00:24:20.542 [2024-07-24 19:06:57.802558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.802585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 00:24:20.542 [2024-07-24 19:06:57.802785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.802814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 00:24:20.542 [2024-07-24 19:06:57.803014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.803043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 00:24:20.542 [2024-07-24 19:06:57.803207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.803233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 00:24:20.542 [2024-07-24 19:06:57.803414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.803441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 00:24:20.542 [2024-07-24 19:06:57.803622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.803651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 
00:24:20.542 [2024-07-24 19:06:57.803826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.803851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 00:24:20.542 [2024-07-24 19:06:57.804025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.804053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 00:24:20.542 [2024-07-24 19:06:57.804236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.804262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 00:24:20.542 [2024-07-24 19:06:57.804414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.804439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 00:24:20.542 [2024-07-24 19:06:57.804577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.542 [2024-07-24 19:06:57.804605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.542 qpair failed and we were unable to recover it. 00:24:20.542 [2024-07-24 19:06:57.804783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.543 [2024-07-24 19:06:57.804807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.543 qpair failed and we were unable to recover it. 00:24:20.543 [2024-07-24 19:06:57.804927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.543 [2024-07-24 19:06:57.804953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.543 qpair failed and we were unable to recover it. 00:24:20.543 [2024-07-24 19:06:57.805107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.543 [2024-07-24 19:06:57.805149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.543 qpair failed and we were unable to recover it. 00:24:20.543 [2024-07-24 19:06:57.805319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.543 [2024-07-24 19:06:57.805347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.543 qpair failed and we were unable to recover it. 00:24:20.543 [2024-07-24 19:06:57.805514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.543 [2024-07-24 19:06:57.805539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.543 qpair failed and we were unable to recover it. 
00:24:20.543 [2024-07-24 19:06:57.805696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.543 [2024-07-24 19:06:57.805737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.543 qpair failed and we were unable to recover it. 00:24:20.543 [2024-07-24 19:06:57.805909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.543 [2024-07-24 19:06:57.805936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.543 qpair failed and we were unable to recover it. 00:24:20.543 [2024-07-24 19:06:57.806120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.543 [2024-07-24 19:06:57.806146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.543 qpair failed and we were unable to recover it. 00:24:20.543 [2024-07-24 19:06:57.806276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.543 [2024-07-24 19:06:57.806303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.543 qpair failed and we were unable to recover it. 00:24:20.543 [2024-07-24 19:06:57.806480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.543 [2024-07-24 19:06:57.806523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.543 qpair failed and we were unable to recover it. 00:24:20.543 [2024-07-24 19:06:57.806708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.543 [2024-07-24 19:06:57.806733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.543 qpair failed and we were unable to recover it. 00:24:20.543 [2024-07-24 19:06:57.806866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.543 [2024-07-24 19:06:57.806891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.543 qpair failed and we were unable to recover it. 00:24:20.543 [2024-07-24 19:06:57.807042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.543 [2024-07-24 19:06:57.807084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.543 qpair failed and we were unable to recover it. 00:24:20.543 [2024-07-24 19:06:57.807243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.543 [2024-07-24 19:06:57.807268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.543 qpair failed and we were unable to recover it. 00:24:20.543 [2024-07-24 19:06:57.807460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.543 [2024-07-24 19:06:57.807521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.543 qpair failed and we were unable to recover it. 
00:24:20.543 [2024-07-24 19:06:57.807662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.543 [2024-07-24 19:06:57.807690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.543 qpair failed and we were unable to recover it. 00:24:20.543 [2024-07-24 19:06:57.807841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.543 [2024-07-24 19:06:57.807866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.543 qpair failed and we were unable to recover it. 00:24:20.543 [2024-07-24 19:06:57.808015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.543 [2024-07-24 19:06:57.808040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.543 qpair failed and we were unable to recover it. 00:24:20.543 [2024-07-24 19:06:57.808193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.543 [2024-07-24 19:06:57.808241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.543 qpair failed and we were unable to recover it. 00:24:20.543 [2024-07-24 19:06:57.808415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.543 [2024-07-24 19:06:57.808441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.543 qpair failed and we were unable to recover it. 00:24:20.543 [2024-07-24 19:06:57.808595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.543 [2024-07-24 19:06:57.808620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.543 qpair failed and we were unable to recover it. 00:24:20.543 [2024-07-24 19:06:57.808778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.543 [2024-07-24 19:06:57.808803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.543 qpair failed and we were unable to recover it. 00:24:20.543 [2024-07-24 19:06:57.808923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.543 [2024-07-24 19:06:57.808949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.543 qpair failed and we were unable to recover it. 00:24:20.543 [2024-07-24 19:06:57.809121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.543 [2024-07-24 19:06:57.809161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.543 qpair failed and we were unable to recover it. 00:24:20.543 [2024-07-24 19:06:57.809294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.543 [2024-07-24 19:06:57.809322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:20.543 qpair failed and we were unable to recover it. 
00:24:20.543 [2024-07-24 19:06:57.809476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.543 [2024-07-24 19:06:57.809502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.543 qpair failed and we were unable to recover it.
00:24:20.543 [2024-07-24 19:06:57.809657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.543 [2024-07-24 19:06:57.809685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.543 qpair failed and we were unable to recover it.
00:24:20.543 [2024-07-24 19:06:57.809805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.543 [2024-07-24 19:06:57.809832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.543 qpair failed and we were unable to recover it.
00:24:20.543 [2024-07-24 19:06:57.810009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.543 [2024-07-24 19:06:57.810035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.543 qpair failed and we were unable to recover it.
00:24:20.543 [2024-07-24 19:06:57.810189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.543 [2024-07-24 19:06:57.810219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.543 qpair failed and we were unable to recover it.
00:24:20.543 [2024-07-24 19:06:57.810383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.543 [2024-07-24 19:06:57.810411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.543 qpair failed and we were unable to recover it.
00:24:20.543 [2024-07-24 19:06:57.810561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.543 [2024-07-24 19:06:57.810586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.543 qpair failed and we were unable to recover it.
00:24:20.543 [2024-07-24 19:06:57.810782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.543 [2024-07-24 19:06:57.810833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.543 qpair failed and we were unable to recover it.
00:24:20.543 [2024-07-24 19:06:57.811006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.543 [2024-07-24 19:06:57.811034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.543 qpair failed and we were unable to recover it.
00:24:20.543 [2024-07-24 19:06:57.811200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.543 [2024-07-24 19:06:57.811226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.543 qpair failed and we were unable to recover it.
00:24:20.543 [2024-07-24 19:06:57.811428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.543 [2024-07-24 19:06:57.811484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.543 qpair failed and we were unable to recover it.
00:24:20.543 [2024-07-24 19:06:57.811673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.543 [2024-07-24 19:06:57.811701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.544 qpair failed and we were unable to recover it.
00:24:20.544 [2024-07-24 19:06:57.811847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.544 [2024-07-24 19:06:57.811874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.544 qpair failed and we were unable to recover it.
00:24:20.544 [2024-07-24 19:06:57.812049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.544 [2024-07-24 19:06:57.812079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.544 qpair failed and we were unable to recover it.
00:24:20.544 [2024-07-24 19:06:57.812260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.544 [2024-07-24 19:06:57.812290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.544 qpair failed and we were unable to recover it.
00:24:20.544 [2024-07-24 19:06:57.812427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.544 [2024-07-24 19:06:57.812452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.544 qpair failed and we were unable to recover it.
00:24:20.544 [2024-07-24 19:06:57.812607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.544 [2024-07-24 19:06:57.812649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.544 qpair failed and we were unable to recover it.
00:24:20.544 [2024-07-24 19:06:57.812840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.544 [2024-07-24 19:06:57.812869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.544 qpair failed and we were unable to recover it.
00:24:20.544 [2024-07-24 19:06:57.813015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.544 [2024-07-24 19:06:57.813041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.544 qpair failed and we were unable to recover it.
00:24:20.544 [2024-07-24 19:06:57.813183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.544 [2024-07-24 19:06:57.813210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.544 qpair failed and we were unable to recover it.
00:24:20.544 [2024-07-24 19:06:57.813420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.544 [2024-07-24 19:06:57.813465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.544 qpair failed and we were unable to recover it.
00:24:20.544 [2024-07-24 19:06:57.813645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.544 [2024-07-24 19:06:57.813673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.544 qpair failed and we were unable to recover it.
00:24:20.544 [2024-07-24 19:06:57.813814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.544 [2024-07-24 19:06:57.813844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.544 qpair failed and we were unable to recover it.
00:24:20.544 [2024-07-24 19:06:57.814009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.544 [2024-07-24 19:06:57.814038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.544 qpair failed and we were unable to recover it.
00:24:20.544 [2024-07-24 19:06:57.814198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.544 [2024-07-24 19:06:57.814225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.544 qpair failed and we were unable to recover it.
00:24:20.544 [2024-07-24 19:06:57.814375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.544 [2024-07-24 19:06:57.814400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.544 qpair failed and we were unable to recover it.
00:24:20.544 [2024-07-24 19:06:57.814553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.544 [2024-07-24 19:06:57.814596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.544 qpair failed and we were unable to recover it.
00:24:20.544 [2024-07-24 19:06:57.814736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.544 [2024-07-24 19:06:57.814761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.544 qpair failed and we were unable to recover it.
00:24:20.544 [2024-07-24 19:06:57.814935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.544 [2024-07-24 19:06:57.814978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.544 qpair failed and we were unable to recover it.
00:24:20.544 [2024-07-24 19:06:57.815188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.544 [2024-07-24 19:06:57.815230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.544 qpair failed and we were unable to recover it.
00:24:20.544 [2024-07-24 19:06:57.815435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.544 [2024-07-24 19:06:57.815461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.544 qpair failed and we were unable to recover it.
00:24:20.544 [2024-07-24 19:06:57.815662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.544 [2024-07-24 19:06:57.815712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.544 qpair failed and we were unable to recover it.
00:24:20.544 [2024-07-24 19:06:57.815901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.544 [2024-07-24 19:06:57.815954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.544 qpair failed and we were unable to recover it.
00:24:20.544 [2024-07-24 19:06:57.816133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.544 [2024-07-24 19:06:57.816159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.544 qpair failed and we were unable to recover it.
00:24:20.544 [2024-07-24 19:06:57.816311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.544 [2024-07-24 19:06:57.816336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.544 qpair failed and we were unable to recover it.
00:24:20.544 [2024-07-24 19:06:57.816549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.544 [2024-07-24 19:06:57.816574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.544 qpair failed and we were unable to recover it.
00:24:20.544 [2024-07-24 19:06:57.816746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.544 [2024-07-24 19:06:57.816771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.544 qpair failed and we were unable to recover it.
00:24:20.544 [2024-07-24 19:06:57.816986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.544 [2024-07-24 19:06:57.817011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.544 qpair failed and we were unable to recover it.
00:24:20.544 [2024-07-24 19:06:57.817142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.544 [2024-07-24 19:06:57.817167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.544 qpair failed and we were unable to recover it.
00:24:20.544 [2024-07-24 19:06:57.817315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.544 [2024-07-24 19:06:57.817340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.544 qpair failed and we were unable to recover it.
00:24:20.544 [2024-07-24 19:06:57.817514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.544 [2024-07-24 19:06:57.817539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.544 qpair failed and we were unable to recover it.
00:24:20.544 [2024-07-24 19:06:57.817716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.545 [2024-07-24 19:06:57.817745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.545 qpair failed and we were unable to recover it.
00:24:20.545 [2024-07-24 19:06:57.817911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.545 [2024-07-24 19:06:57.817936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.545 qpair failed and we were unable to recover it.
00:24:20.545 [2024-07-24 19:06:57.818115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.545 [2024-07-24 19:06:57.818144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.545 qpair failed and we were unable to recover it.
00:24:20.545 [2024-07-24 19:06:57.818288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.545 [2024-07-24 19:06:57.818315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.545 qpair failed and we were unable to recover it.
00:24:20.545 [2024-07-24 19:06:57.818482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.545 [2024-07-24 19:06:57.818507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.545 qpair failed and we were unable to recover it.
00:24:20.545 [2024-07-24 19:06:57.818654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.545 [2024-07-24 19:06:57.818697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.545 qpair failed and we were unable to recover it.
00:24:20.545 [2024-07-24 19:06:57.818880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.545 [2024-07-24 19:06:57.818935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.545 qpair failed and we were unable to recover it.
00:24:20.545 [2024-07-24 19:06:57.819114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.545 [2024-07-24 19:06:57.819141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.545 qpair failed and we were unable to recover it.
00:24:20.545 [2024-07-24 19:06:57.819330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.545 [2024-07-24 19:06:57.819358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.545 qpair failed and we were unable to recover it.
00:24:20.545 [2024-07-24 19:06:57.819625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.545 [2024-07-24 19:06:57.819677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.545 qpair failed and we were unable to recover it.
00:24:20.545 [2024-07-24 19:06:57.819831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.545 [2024-07-24 19:06:57.819857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.545 qpair failed and we were unable to recover it.
00:24:20.545 [2024-07-24 19:06:57.820034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.545 [2024-07-24 19:06:57.820076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.545 qpair failed and we were unable to recover it.
00:24:20.545 [2024-07-24 19:06:57.820268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.545 [2024-07-24 19:06:57.820294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.545 qpair failed and we were unable to recover it.
00:24:20.545 [2024-07-24 19:06:57.820448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.545 [2024-07-24 19:06:57.820474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.545 qpair failed and we were unable to recover it.
00:24:20.545 [2024-07-24 19:06:57.820629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.545 [2024-07-24 19:06:57.820656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.545 qpair failed and we were unable to recover it.
00:24:20.545 [2024-07-24 19:06:57.820884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.545 [2024-07-24 19:06:57.820910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.545 qpair failed and we were unable to recover it.
00:24:20.545 [2024-07-24 19:06:57.821063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.545 [2024-07-24 19:06:57.821089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.545 qpair failed and we were unable to recover it.
00:24:20.545 [2024-07-24 19:06:57.821224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.545 [2024-07-24 19:06:57.821249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.545 qpair failed and we were unable to recover it.
00:24:20.545 [2024-07-24 19:06:57.821425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.545 [2024-07-24 19:06:57.821450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.545 qpair failed and we were unable to recover it.
00:24:20.545 [2024-07-24 19:06:57.821600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.545 [2024-07-24 19:06:57.821631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.545 qpair failed and we were unable to recover it.
00:24:20.545 [2024-07-24 19:06:57.821783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.545 [2024-07-24 19:06:57.821809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.545 qpair failed and we were unable to recover it.
00:24:20.545 [2024-07-24 19:06:57.821981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.545 [2024-07-24 19:06:57.822024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.545 qpair failed and we were unable to recover it.
00:24:20.545 [2024-07-24 19:06:57.822177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.545 [2024-07-24 19:06:57.822203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.545 qpair failed and we were unable to recover it.
00:24:20.545 [2024-07-24 19:06:57.822357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.545 [2024-07-24 19:06:57.822400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.545 qpair failed and we were unable to recover it.
00:24:20.545 [2024-07-24 19:06:57.822535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.545 [2024-07-24 19:06:57.822565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.545 qpair failed and we were unable to recover it.
00:24:20.545 [2024-07-24 19:06:57.822744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.545 [2024-07-24 19:06:57.822769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.545 qpair failed and we were unable to recover it.
00:24:20.545 [2024-07-24 19:06:57.822912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.545 [2024-07-24 19:06:57.822940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.545 qpair failed and we were unable to recover it.
00:24:20.545 [2024-07-24 19:06:57.823093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.545 [2024-07-24 19:06:57.823126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.545 qpair failed and we were unable to recover it.
00:24:20.545 [2024-07-24 19:06:57.823315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.545 [2024-07-24 19:06:57.823340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.545 qpair failed and we were unable to recover it.
00:24:20.545 [2024-07-24 19:06:57.823510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.545 [2024-07-24 19:06:57.823539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.545 qpair failed and we were unable to recover it.
00:24:20.545 [2024-07-24 19:06:57.823755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.545 [2024-07-24 19:06:57.823806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.545 qpair failed and we were unable to recover it.
00:24:20.545 [2024-07-24 19:06:57.824003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.545 [2024-07-24 19:06:57.824029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.545 qpair failed and we were unable to recover it.
00:24:20.545 [2024-07-24 19:06:57.824161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.545 [2024-07-24 19:06:57.824187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.545 qpair failed and we were unable to recover it.
00:24:20.545 [2024-07-24 19:06:57.824392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.824421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.824618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.824644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.824817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.824845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.825000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.825028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.825201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.825227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.825423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.825452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.825608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.825633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.825755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.825782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.825935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.825961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.826169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.826199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.826355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.826382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.826512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.826538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.826692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.826718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.826889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.826928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.827093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.827130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.827268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.827295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.827484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.827530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.827687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.827731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.827899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.827931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.828100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.828132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.828259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.828285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.828432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.828462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.828650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.828679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.828873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.828902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.829070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.829099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.829282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.829308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.829496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.829530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.829688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.829730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.829896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.829924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.830064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.830092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.830248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.830274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.830415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.830445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.830587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.830615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.830757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.830786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.830926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.830955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.546 qpair failed and we were unable to recover it.
00:24:20.546 [2024-07-24 19:06:57.831122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.546 [2024-07-24 19:06:57.831164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.831296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.547 [2024-07-24 19:06:57.831322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.831472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.547 [2024-07-24 19:06:57.831497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.831621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.547 [2024-07-24 19:06:57.831646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.831812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.547 [2024-07-24 19:06:57.831837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.831985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.547 [2024-07-24 19:06:57.832013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.832195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.547 [2024-07-24 19:06:57.832221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.832370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.547 [2024-07-24 19:06:57.832415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.832586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.547 [2024-07-24 19:06:57.832611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.832776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.547 [2024-07-24 19:06:57.832804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.832969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.547 [2024-07-24 19:06:57.832997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.833160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.547 [2024-07-24 19:06:57.833186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.833315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.547 [2024-07-24 19:06:57.833340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.833504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.547 [2024-07-24 19:06:57.833533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.833701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.547 [2024-07-24 19:06:57.833730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.833900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.547 [2024-07-24 19:06:57.833928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.834091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.547 [2024-07-24 19:06:57.834125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.834303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.547 [2024-07-24 19:06:57.834329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.834478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.547 [2024-07-24 19:06:57.834511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.834651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.547 [2024-07-24 19:06:57.834679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.834880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.547 [2024-07-24 19:06:57.834908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.835045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.547 [2024-07-24 19:06:57.835073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.835241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.547 [2024-07-24 19:06:57.835267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.835461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.547 [2024-07-24 19:06:57.835517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.835656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.547 [2024-07-24 19:06:57.835684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.835828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.547 [2024-07-24 19:06:57.835856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.836022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.547 [2024-07-24 19:06:57.836050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.836230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.547 [2024-07-24 19:06:57.836256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.836448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.547 [2024-07-24 19:06:57.836476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.836646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.547 [2024-07-24 19:06:57.836700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.836894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.547 [2024-07-24 19:06:57.836923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.837070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.547 [2024-07-24 19:06:57.837095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.837233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.547 [2024-07-24 19:06:57.837258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.837378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.547 [2024-07-24 19:06:57.837420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.837605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.547 [2024-07-24 19:06:57.837632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.547 qpair failed and we were unable to recover it.
00:24:20.547 [2024-07-24 19:06:57.837871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.548 [2024-07-24 19:06:57.837899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.548 qpair failed and we were unable to recover it.
00:24:20.548 [2024-07-24 19:06:57.838088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.548 [2024-07-24 19:06:57.838122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.548 qpair failed and we were unable to recover it.
00:24:20.548 [2024-07-24 19:06:57.838316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.548 [2024-07-24 19:06:57.838341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.548 qpair failed and we were unable to recover it.
00:24:20.548 [2024-07-24 19:06:57.838509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.548 [2024-07-24 19:06:57.838549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.548 qpair failed and we were unable to recover it.
00:24:20.548 [2024-07-24 19:06:57.838714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.548 [2024-07-24 19:06:57.838741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.548 qpair failed and we were unable to recover it.
00:24:20.548 [2024-07-24 19:06:57.838904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.548 [2024-07-24 19:06:57.838933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.548 qpair failed and we were unable to recover it.
00:24:20.548 [2024-07-24 19:06:57.839122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.548 [2024-07-24 19:06:57.839148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.548 qpair failed and we were unable to recover it.
00:24:20.548 [2024-07-24 19:06:57.839272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.548 [2024-07-24 19:06:57.839297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.548 qpair failed and we were unable to recover it.
00:24:20.548 [2024-07-24 19:06:57.839447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.548 [2024-07-24 19:06:57.839487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.548 qpair failed and we were unable to recover it.
00:24:20.548 [2024-07-24 19:06:57.839651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.548 [2024-07-24 19:06:57.839680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.548 qpair failed and we were unable to recover it.
00:24:20.548 [2024-07-24 19:06:57.839836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.548 [2024-07-24 19:06:57.839879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.548 qpair failed and we were unable to recover it.
00:24:20.548 [2024-07-24 19:06:57.840074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.548 [2024-07-24 19:06:57.840108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.548 qpair failed and we were unable to recover it.
00:24:20.548 [2024-07-24 19:06:57.840281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.548 [2024-07-24 19:06:57.840305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.548 qpair failed and we were unable to recover it.
00:24:20.548 [2024-07-24 19:06:57.840448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.548 [2024-07-24 19:06:57.840473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.548 qpair failed and we were unable to recover it.
00:24:20.548 [2024-07-24 19:06:57.840621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.548 [2024-07-24 19:06:57.840650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.548 qpair failed and we were unable to recover it.
00:24:20.548 [2024-07-24 19:06:57.840804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.548 [2024-07-24 19:06:57.840846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.548 qpair failed and we were unable to recover it.
00:24:20.548 [2024-07-24 19:06:57.840993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.548 [2024-07-24 19:06:57.841018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.548 qpair failed and we were unable to recover it.
00:24:20.548 [2024-07-24 19:06:57.841143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.548 [2024-07-24 19:06:57.841169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.548 qpair failed and we were unable to recover it.
00:24:20.548 [2024-07-24 19:06:57.841297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.548 [2024-07-24 19:06:57.841323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.548 qpair failed and we were unable to recover it.
00:24:20.548 [2024-07-24 19:06:57.841474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.548 [2024-07-24 19:06:57.841515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.548 qpair failed and we were unable to recover it.
00:24:20.548 [2024-07-24 19:06:57.841685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.548 [2024-07-24 19:06:57.841714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.548 qpair failed and we were unable to recover it.
00:24:20.548 [2024-07-24 19:06:57.841888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.548 [2024-07-24 19:06:57.841916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.548 qpair failed and we were unable to recover it.
00:24:20.548 [2024-07-24 19:06:57.842073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.548 [2024-07-24 19:06:57.842098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.548 qpair failed and we were unable to recover it.
00:24:20.548 [2024-07-24 19:06:57.842225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.548 [2024-07-24 19:06:57.842254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.548 qpair failed and we were unable to recover it.
00:24:20.548 [2024-07-24 19:06:57.842378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.548 [2024-07-24 19:06:57.842403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.548 qpair failed and we were unable to recover it.
00:24:20.548 [2024-07-24 19:06:57.842567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.548 [2024-07-24 19:06:57.842596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.548 qpair failed and we were unable to recover it.
00:24:20.548 [2024-07-24 19:06:57.842765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.548 [2024-07-24 19:06:57.842793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.548 qpair failed and we were unable to recover it. 00:24:20.548 [2024-07-24 19:06:57.843024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.548 [2024-07-24 19:06:57.843052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.548 qpair failed and we were unable to recover it. 00:24:20.548 [2024-07-24 19:06:57.843212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.548 [2024-07-24 19:06:57.843238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.548 qpair failed and we were unable to recover it. 00:24:20.548 [2024-07-24 19:06:57.843360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.548 [2024-07-24 19:06:57.843384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.548 qpair failed and we were unable to recover it. 00:24:20.548 [2024-07-24 19:06:57.843561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.548 [2024-07-24 19:06:57.843589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.548 qpair failed and we were unable to recover it. 00:24:20.548 [2024-07-24 19:06:57.843747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.548 [2024-07-24 19:06:57.843775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.548 qpair failed and we were unable to recover it. 00:24:20.548 [2024-07-24 19:06:57.843916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.548 [2024-07-24 19:06:57.843943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.548 qpair failed and we were unable to recover it. 00:24:20.548 [2024-07-24 19:06:57.844126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.548 [2024-07-24 19:06:57.844152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.548 qpair failed and we were unable to recover it. 00:24:20.548 [2024-07-24 19:06:57.844304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.548 [2024-07-24 19:06:57.844331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.549 qpair failed and we were unable to recover it. 00:24:20.549 [2024-07-24 19:06:57.844485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.549 [2024-07-24 19:06:57.844513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.549 qpair failed and we were unable to recover it. 
00:24:20.549 [2024-07-24 19:06:57.844679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.549 [2024-07-24 19:06:57.844703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.549 qpair failed and we were unable to recover it. 00:24:20.549 [2024-07-24 19:06:57.844860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.549 [2024-07-24 19:06:57.844886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.549 qpair failed and we were unable to recover it. 00:24:20.549 [2024-07-24 19:06:57.845036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.549 [2024-07-24 19:06:57.845062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.549 qpair failed and we were unable to recover it. 00:24:20.549 [2024-07-24 19:06:57.845219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.549 [2024-07-24 19:06:57.845244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.549 qpair failed and we were unable to recover it. 00:24:20.549 [2024-07-24 19:06:57.845402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.549 [2024-07-24 19:06:57.845429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.549 qpair failed and we were unable to recover it. 00:24:20.549 [2024-07-24 19:06:57.845565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.549 [2024-07-24 19:06:57.845595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.549 qpair failed and we were unable to recover it. 00:24:20.549 [2024-07-24 19:06:57.845767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.549 [2024-07-24 19:06:57.845792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.549 qpair failed and we were unable to recover it. 00:24:20.549 [2024-07-24 19:06:57.845986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.549 [2024-07-24 19:06:57.846013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.549 qpair failed and we were unable to recover it. 00:24:20.549 [2024-07-24 19:06:57.846175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.549 [2024-07-24 19:06:57.846203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.549 qpair failed and we were unable to recover it. 00:24:20.549 [2024-07-24 19:06:57.846356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.549 [2024-07-24 19:06:57.846381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.549 qpair failed and we were unable to recover it. 
00:24:20.549 [2024-07-24 19:06:57.846514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.549 [2024-07-24 19:06:57.846539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.549 qpair failed and we were unable to recover it. 00:24:20.549 [2024-07-24 19:06:57.846718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.549 [2024-07-24 19:06:57.846747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.549 qpair failed and we were unable to recover it. 00:24:20.549 [2024-07-24 19:06:57.846883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.549 [2024-07-24 19:06:57.846907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.549 qpair failed and we were unable to recover it. 00:24:20.549 [2024-07-24 19:06:57.847060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.549 [2024-07-24 19:06:57.847085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.549 qpair failed and we were unable to recover it. 00:24:20.549 [2024-07-24 19:06:57.847216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.549 [2024-07-24 19:06:57.847241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.549 qpair failed and we were unable to recover it. 00:24:20.549 [2024-07-24 19:06:57.847369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.549 [2024-07-24 19:06:57.847394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.549 qpair failed and we were unable to recover it. 00:24:20.549 [2024-07-24 19:06:57.847565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.549 [2024-07-24 19:06:57.847595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.549 qpair failed and we were unable to recover it. 00:24:20.549 [2024-07-24 19:06:57.847757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.549 [2024-07-24 19:06:57.847785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.549 qpair failed and we were unable to recover it. 00:24:20.549 [2024-07-24 19:06:57.847937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.549 [2024-07-24 19:06:57.847962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.549 qpair failed and we were unable to recover it. 00:24:20.549 [2024-07-24 19:06:57.848096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.549 [2024-07-24 19:06:57.848134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.549 qpair failed and we were unable to recover it. 
00:24:20.549 [2024-07-24 19:06:57.848284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.549 [2024-07-24 19:06:57.848326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.549 qpair failed and we were unable to recover it. 00:24:20.549 [2024-07-24 19:06:57.848520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.549 [2024-07-24 19:06:57.848544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.549 qpair failed and we were unable to recover it. 00:24:20.549 [2024-07-24 19:06:57.848697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.549 [2024-07-24 19:06:57.848722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.549 qpair failed and we were unable to recover it. 00:24:20.549 [2024-07-24 19:06:57.848849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.549 [2024-07-24 19:06:57.848876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 00:24:20.550 [2024-07-24 19:06:57.849049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.550 [2024-07-24 19:06:57.849075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 00:24:20.550 [2024-07-24 19:06:57.849210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.550 [2024-07-24 19:06:57.849253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 00:24:20.550 [2024-07-24 19:06:57.849419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.550 [2024-07-24 19:06:57.849446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 00:24:20.550 [2024-07-24 19:06:57.849626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.550 [2024-07-24 19:06:57.849655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 00:24:20.550 [2024-07-24 19:06:57.849792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.550 [2024-07-24 19:06:57.849832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 00:24:20.550 [2024-07-24 19:06:57.850001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.550 [2024-07-24 19:06:57.850029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 
00:24:20.550 [2024-07-24 19:06:57.850178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.550 [2024-07-24 19:06:57.850203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 00:24:20.550 [2024-07-24 19:06:57.850402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.550 [2024-07-24 19:06:57.850430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 00:24:20.550 [2024-07-24 19:06:57.850606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.550 [2024-07-24 19:06:57.850633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 00:24:20.550 [2024-07-24 19:06:57.850754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.550 [2024-07-24 19:06:57.850778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 00:24:20.550 [2024-07-24 19:06:57.850923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.550 [2024-07-24 19:06:57.850964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 00:24:20.550 [2024-07-24 19:06:57.851125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.550 [2024-07-24 19:06:57.851154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 00:24:20.550 [2024-07-24 19:06:57.851328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.550 [2024-07-24 19:06:57.851354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 00:24:20.550 [2024-07-24 19:06:57.851554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.550 [2024-07-24 19:06:57.851582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 00:24:20.550 [2024-07-24 19:06:57.851717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.550 [2024-07-24 19:06:57.851746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 00:24:20.550 [2024-07-24 19:06:57.851928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.550 [2024-07-24 19:06:57.851953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 
00:24:20.550 [2024-07-24 19:06:57.852083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.550 [2024-07-24 19:06:57.852114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 00:24:20.550 [2024-07-24 19:06:57.852274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.550 [2024-07-24 19:06:57.852299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 00:24:20.550 [2024-07-24 19:06:57.852476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.550 [2024-07-24 19:06:57.852500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 00:24:20.550 [2024-07-24 19:06:57.852622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.550 [2024-07-24 19:06:57.852647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 00:24:20.550 [2024-07-24 19:06:57.852778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.550 [2024-07-24 19:06:57.852804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 00:24:20.550 [2024-07-24 19:06:57.852980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.550 [2024-07-24 19:06:57.853005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 00:24:20.550 [2024-07-24 19:06:57.853153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.550 [2024-07-24 19:06:57.853182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 00:24:20.550 [2024-07-24 19:06:57.853312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.550 [2024-07-24 19:06:57.853342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 00:24:20.550 [2024-07-24 19:06:57.853519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.550 [2024-07-24 19:06:57.853543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 00:24:20.550 [2024-07-24 19:06:57.853693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.550 [2024-07-24 19:06:57.853722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 
00:24:20.550 [2024-07-24 19:06:57.853903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.550 [2024-07-24 19:06:57.853930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 00:24:20.550 [2024-07-24 19:06:57.854055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.550 [2024-07-24 19:06:57.854079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 00:24:20.550 [2024-07-24 19:06:57.854253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.550 [2024-07-24 19:06:57.854281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 00:24:20.550 [2024-07-24 19:06:57.854451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.550 [2024-07-24 19:06:57.854481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 00:24:20.550 [2024-07-24 19:06:57.854655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.550 [2024-07-24 19:06:57.854682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 00:24:20.550 [2024-07-24 19:06:57.854811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.550 [2024-07-24 19:06:57.854853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 00:24:20.550 [2024-07-24 19:06:57.855028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.550 [2024-07-24 19:06:57.855059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 00:24:20.550 [2024-07-24 19:06:57.855233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.550 [2024-07-24 19:06:57.855259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.550 qpair failed and we were unable to recover it. 00:24:20.550 [2024-07-24 19:06:57.855377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.855419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 00:24:20.551 [2024-07-24 19:06:57.855601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.855630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 
00:24:20.551 [2024-07-24 19:06:57.855789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.855813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 00:24:20.551 [2024-07-24 19:06:57.856003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.856031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 00:24:20.551 [2024-07-24 19:06:57.856195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.856224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 00:24:20.551 [2024-07-24 19:06:57.856414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.856444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 00:24:20.551 [2024-07-24 19:06:57.856635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.856665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 00:24:20.551 [2024-07-24 19:06:57.856846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.856872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 00:24:20.551 [2024-07-24 19:06:57.857004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.857029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 00:24:20.551 [2024-07-24 19:06:57.857154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.857187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 00:24:20.551 [2024-07-24 19:06:57.857337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.857363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 00:24:20.551 [2024-07-24 19:06:57.857490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.857516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 
00:24:20.551 [2024-07-24 19:06:57.857701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.857730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 00:24:20.551 [2024-07-24 19:06:57.857867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.857894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 00:24:20.551 [2024-07-24 19:06:57.858057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.858082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 00:24:20.551 [2024-07-24 19:06:57.858270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.858300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 00:24:20.551 [2024-07-24 19:06:57.858470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.858499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 00:24:20.551 [2024-07-24 19:06:57.858668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.858692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 00:24:20.551 [2024-07-24 19:06:57.858864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.858892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 00:24:20.551 [2024-07-24 19:06:57.859051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.859076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 00:24:20.551 [2024-07-24 19:06:57.859226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.859254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 00:24:20.551 [2024-07-24 19:06:57.859431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.859458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 
00:24:20.551 [2024-07-24 19:06:57.859624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.859651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 00:24:20.551 [2024-07-24 19:06:57.859849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.859875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 00:24:20.551 [2024-07-24 19:06:57.860027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.860070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 00:24:20.551 [2024-07-24 19:06:57.860229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.860254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 00:24:20.551 [2024-07-24 19:06:57.860427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.860452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 00:24:20.551 [2024-07-24 19:06:57.860580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.860606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 00:24:20.551 [2024-07-24 19:06:57.860751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.860776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 00:24:20.551 [2024-07-24 19:06:57.860903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.860928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 00:24:20.551 [2024-07-24 19:06:57.861129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.861158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 00:24:20.551 [2024-07-24 19:06:57.861298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.861326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 
00:24:20.551 [2024-07-24 19:06:57.861467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.861492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 00:24:20.551 [2024-07-24 19:06:57.861649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.861692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 00:24:20.551 [2024-07-24 19:06:57.861881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.861910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 00:24:20.551 [2024-07-24 19:06:57.862053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.862078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 00:24:20.551 [2024-07-24 19:06:57.862234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.862260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.551 qpair failed and we were unable to recover it. 00:24:20.551 [2024-07-24 19:06:57.862460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.551 [2024-07-24 19:06:57.862489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 00:24:20.552 [2024-07-24 19:06:57.862688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.862713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 00:24:20.552 [2024-07-24 19:06:57.862908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.862936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 00:24:20.552 [2024-07-24 19:06:57.863094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.863136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 00:24:20.552 [2024-07-24 19:06:57.863292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.863317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 
00:24:20.552 [2024-07-24 19:06:57.863445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.863470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 00:24:20.552 [2024-07-24 19:06:57.863649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.863675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 00:24:20.552 [2024-07-24 19:06:57.863837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.863863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 00:24:20.552 [2024-07-24 19:06:57.864015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.864059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 00:24:20.552 [2024-07-24 19:06:57.864207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.864237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 00:24:20.552 [2024-07-24 19:06:57.864434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.864459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 00:24:20.552 [2024-07-24 19:06:57.864630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.864660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 00:24:20.552 [2024-07-24 19:06:57.864826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.864861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 00:24:20.552 [2024-07-24 19:06:57.865058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.865082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 00:24:20.552 [2024-07-24 19:06:57.865218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.865244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 
00:24:20.552 [2024-07-24 19:06:57.865427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.865456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 00:24:20.552 [2024-07-24 19:06:57.865620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.865645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 00:24:20.552 [2024-07-24 19:06:57.865842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.865870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 00:24:20.552 [2024-07-24 19:06:57.866056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.866081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 00:24:20.552 [2024-07-24 19:06:57.866241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.866268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 00:24:20.552 [2024-07-24 19:06:57.866419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.866443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 00:24:20.552 [2024-07-24 19:06:57.866620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.866647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 00:24:20.552 [2024-07-24 19:06:57.866814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.866840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 00:24:20.552 [2024-07-24 19:06:57.867007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.867048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 00:24:20.552 [2024-07-24 19:06:57.867243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.867270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 
00:24:20.552 [2024-07-24 19:06:57.867420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.867444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 00:24:20.552 [2024-07-24 19:06:57.867597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.867624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 00:24:20.552 [2024-07-24 19:06:57.867820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.867849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 00:24:20.552 [2024-07-24 19:06:57.868041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.868067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 00:24:20.552 [2024-07-24 19:06:57.868210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.868239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 00:24:20.552 [2024-07-24 19:06:57.868403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.868430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 00:24:20.552 [2024-07-24 19:06:57.868610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.868637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 00:24:20.552 [2024-07-24 19:06:57.868790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.868815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 00:24:20.552 [2024-07-24 19:06:57.868967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.868991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 00:24:20.552 [2024-07-24 19:06:57.869143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.869170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.552 qpair failed and we were unable to recover it. 
00:24:20.552 [2024-07-24 19:06:57.869300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.552 [2024-07-24 19:06:57.869327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.553 qpair failed and we were unable to recover it. 00:24:20.553 [2024-07-24 19:06:57.869476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.553 [2024-07-24 19:06:57.869500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.553 qpair failed and we were unable to recover it. 00:24:20.553 [2024-07-24 19:06:57.869676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.553 [2024-07-24 19:06:57.869702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.553 qpair failed and we were unable to recover it. 00:24:20.553 [2024-07-24 19:06:57.869849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.553 [2024-07-24 19:06:57.869877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.553 qpair failed and we were unable to recover it. 00:24:20.553 [2024-07-24 19:06:57.870071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.553 [2024-07-24 19:06:57.870099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.553 qpair failed and we were unable to recover it. 00:24:20.553 [2024-07-24 19:06:57.870296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.553 [2024-07-24 19:06:57.870320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.553 qpair failed and we were unable to recover it. 00:24:20.553 [2024-07-24 19:06:57.870468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.553 [2024-07-24 19:06:57.870509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.553 qpair failed and we were unable to recover it. 00:24:20.553 [2024-07-24 19:06:57.870649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.553 [2024-07-24 19:06:57.870676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.553 qpair failed and we were unable to recover it. 00:24:20.553 [2024-07-24 19:06:57.870815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.553 [2024-07-24 19:06:57.870840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.553 qpair failed and we were unable to recover it. 00:24:20.553 [2024-07-24 19:06:57.871011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.553 [2024-07-24 19:06:57.871052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.553 qpair failed and we were unable to recover it. 
00:24:20.553 [2024-07-24 19:06:57.871247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.553 [2024-07-24 19:06:57.871273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.553 qpair failed and we were unable to recover it.
00:24:20.553 [... the same three-message pattern repeats for every reconnect attempt, with successive timestamps from 19:06:57.871 through 19:06:57.910 (roughly two hundred occurrences); only the timestamps differ between repetitions ...]
00:24:20.559 [2024-07-24 19:06:57.910404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.559 [2024-07-24 19:06:57.910429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.559 qpair failed and we were unable to recover it. 00:24:20.559 [2024-07-24 19:06:57.910581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.559 [2024-07-24 19:06:57.910608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.559 qpair failed and we were unable to recover it. 00:24:20.559 [2024-07-24 19:06:57.910783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.559 [2024-07-24 19:06:57.910812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.559 qpair failed and we were unable to recover it. 00:24:20.559 [2024-07-24 19:06:57.911015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.559 [2024-07-24 19:06:57.911040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.559 qpair failed and we were unable to recover it. 00:24:20.559 [2024-07-24 19:06:57.911184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.559 [2024-07-24 19:06:57.911210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.559 qpair failed and we were unable to recover it. 00:24:20.559 [2024-07-24 19:06:57.911360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.559 [2024-07-24 19:06:57.911401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.559 qpair failed and we were unable to recover it. 00:24:20.559 [2024-07-24 19:06:57.911606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.559 [2024-07-24 19:06:57.911631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.559 qpair failed and we were unable to recover it. 00:24:20.559 [2024-07-24 19:06:57.911796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.559 [2024-07-24 19:06:57.911820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.559 qpair failed and we were unable to recover it. 00:24:20.559 [2024-07-24 19:06:57.911992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.559 [2024-07-24 19:06:57.912019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.559 qpair failed and we were unable to recover it. 00:24:20.559 [2024-07-24 19:06:57.912164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.559 [2024-07-24 19:06:57.912193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.559 qpair failed and we were unable to recover it. 
00:24:20.559 [2024-07-24 19:06:57.912345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.559 [2024-07-24 19:06:57.912371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.559 qpair failed and we were unable to recover it. 00:24:20.559 [2024-07-24 19:06:57.912529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.559 [2024-07-24 19:06:57.912554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.559 qpair failed and we were unable to recover it. 00:24:20.559 [2024-07-24 19:06:57.912758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.559 [2024-07-24 19:06:57.912786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.559 qpair failed and we were unable to recover it. 00:24:20.559 [2024-07-24 19:06:57.912945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.559 [2024-07-24 19:06:57.912974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.559 qpair failed and we were unable to recover it. 00:24:20.559 [2024-07-24 19:06:57.913128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.559 [2024-07-24 19:06:57.913171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.559 qpair failed and we were unable to recover it. 00:24:20.559 [2024-07-24 19:06:57.913298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.559 [2024-07-24 19:06:57.913324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.559 qpair failed and we were unable to recover it. 00:24:20.559 [2024-07-24 19:06:57.913481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.559 [2024-07-24 19:06:57.913506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.559 qpair failed and we were unable to recover it. 00:24:20.559 [2024-07-24 19:06:57.913683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.559 [2024-07-24 19:06:57.913708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.559 qpair failed and we were unable to recover it. 00:24:20.559 [2024-07-24 19:06:57.913870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.559 [2024-07-24 19:06:57.913897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.559 qpair failed and we were unable to recover it. 00:24:20.559 [2024-07-24 19:06:57.914108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.559 [2024-07-24 19:06:57.914134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.559 qpair failed and we were unable to recover it. 
00:24:20.559 [2024-07-24 19:06:57.914266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.559 [2024-07-24 19:06:57.914291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.559 qpair failed and we were unable to recover it. 00:24:20.559 [2024-07-24 19:06:57.914446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.559 [2024-07-24 19:06:57.914470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.559 qpair failed and we were unable to recover it. 00:24:20.559 [2024-07-24 19:06:57.914626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.559 [2024-07-24 19:06:57.914651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.559 qpair failed and we were unable to recover it. 00:24:20.559 [2024-07-24 19:06:57.914824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.559 [2024-07-24 19:06:57.914852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.559 qpair failed and we were unable to recover it. 00:24:20.559 [2024-07-24 19:06:57.915001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.559 [2024-07-24 19:06:57.915028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.559 qpair failed and we were unable to recover it. 00:24:20.559 [2024-07-24 19:06:57.915180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.559 [2024-07-24 19:06:57.915207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.559 qpair failed and we were unable to recover it. 00:24:20.559 [2024-07-24 19:06:57.915359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.559 [2024-07-24 19:06:57.915384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.559 qpair failed and we were unable to recover it. 00:24:20.559 [2024-07-24 19:06:57.915591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.559 [2024-07-24 19:06:57.915619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.559 qpair failed and we were unable to recover it. 00:24:20.559 [2024-07-24 19:06:57.915793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.559 [2024-07-24 19:06:57.915819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.559 qpair failed and we were unable to recover it. 00:24:20.559 [2024-07-24 19:06:57.915987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.559 [2024-07-24 19:06:57.916016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.559 qpair failed and we were unable to recover it. 
00:24:20.559 [2024-07-24 19:06:57.916183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.559 [2024-07-24 19:06:57.916212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.559 qpair failed and we were unable to recover it. 00:24:20.559 [2024-07-24 19:06:57.916362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.559 [2024-07-24 19:06:57.916386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.559 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.916561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.916585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.916799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.916824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.916948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.916973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.917126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.917171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.917361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.917388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.917566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.917595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.917770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.917796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.917938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.917966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 
00:24:20.560 [2024-07-24 19:06:57.918128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.918153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.918279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.918304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.918459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.918484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.918676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.918700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.918847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.918889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.919077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.919114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.919266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.919295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.919454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.919482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.919686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.919711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.919859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.919885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 
00:24:20.560 [2024-07-24 19:06:57.920081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.920118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.920296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.920324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.920467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.920493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.920635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.920661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.920843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.920868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.920999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.921024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.921158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.921185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.921336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.921362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.921557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.921582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.921729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.921756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 
00:24:20.560 [2024-07-24 19:06:57.921918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.921945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.922114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.922140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.922333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.922360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.922489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.922516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.922723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.922749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.922894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.922922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.923052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.923079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.923298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.923325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.923478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.923504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.560 qpair failed and we were unable to recover it. 00:24:20.560 [2024-07-24 19:06:57.923679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.560 [2024-07-24 19:06:57.923707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 
00:24:20.561 [2024-07-24 19:06:57.923841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.923867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 00:24:20.561 [2024-07-24 19:06:57.924058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.924083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 00:24:20.561 [2024-07-24 19:06:57.924219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.924246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 00:24:20.561 [2024-07-24 19:06:57.924374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.924399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 00:24:20.561 [2024-07-24 19:06:57.924545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.924569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 00:24:20.561 [2024-07-24 19:06:57.924740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.924767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 00:24:20.561 [2024-07-24 19:06:57.924927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.924953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 00:24:20.561 [2024-07-24 19:06:57.925099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.925136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 00:24:20.561 [2024-07-24 19:06:57.925265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.925291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 00:24:20.561 [2024-07-24 19:06:57.925449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.925476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 
00:24:20.561 [2024-07-24 19:06:57.925670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.925695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 00:24:20.561 [2024-07-24 19:06:57.925815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.925839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 00:24:20.561 [2024-07-24 19:06:57.925998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.926024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 00:24:20.561 [2024-07-24 19:06:57.926191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.926216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 00:24:20.561 [2024-07-24 19:06:57.926339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.926364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 00:24:20.561 [2024-07-24 19:06:57.926519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.926545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 00:24:20.561 [2024-07-24 19:06:57.926678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.926702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 00:24:20.561 [2024-07-24 19:06:57.926831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.926857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 00:24:20.561 [2024-07-24 19:06:57.926997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.927023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 00:24:20.561 [2024-07-24 19:06:57.927222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.927247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 
00:24:20.561 [2024-07-24 19:06:57.927389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.927415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 00:24:20.561 [2024-07-24 19:06:57.927622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.927651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 00:24:20.561 [2024-07-24 19:06:57.927831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.927856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 00:24:20.561 [2024-07-24 19:06:57.927979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.928020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 00:24:20.561 [2024-07-24 19:06:57.928184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.928211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 00:24:20.561 [2024-07-24 19:06:57.928353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.928380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 00:24:20.561 [2024-07-24 19:06:57.928509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.928535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 00:24:20.561 [2024-07-24 19:06:57.928682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.928706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 00:24:20.561 [2024-07-24 19:06:57.928846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.928870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 00:24:20.561 [2024-07-24 19:06:57.929044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.929073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 
00:24:20.561 [2024-07-24 19:06:57.929228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.929254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 00:24:20.561 [2024-07-24 19:06:57.929409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.929434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 00:24:20.561 [2024-07-24 19:06:57.929559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.929585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 00:24:20.561 [2024-07-24 19:06:57.929741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.929766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 00:24:20.561 [2024-07-24 19:06:57.929917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.929945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 00:24:20.561 [2024-07-24 19:06:57.930098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.930131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 00:24:20.561 [2024-07-24 19:06:57.930262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.561 [2024-07-24 19:06:57.930288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.561 qpair failed and we were unable to recover it. 00:24:20.561 [2024-07-24 19:06:57.930407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.562 [2024-07-24 19:06:57.930432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.562 qpair failed and we were unable to recover it. 00:24:20.562 [2024-07-24 19:06:57.930603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.562 [2024-07-24 19:06:57.930631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.562 qpair failed and we were unable to recover it. 00:24:20.562 [2024-07-24 19:06:57.930754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.562 [2024-07-24 19:06:57.930782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.562 qpair failed and we were unable to recover it. 
00:24:20.562 [2024-07-24 19:06:57.930924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.562 [2024-07-24 19:06:57.930949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.562 qpair failed and we were unable to recover it. 00:24:20.562 [2024-07-24 19:06:57.931142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.562 [2024-07-24 19:06:57.931172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.562 qpair failed and we were unable to recover it. 00:24:20.562 [2024-07-24 19:06:57.931319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.562 [2024-07-24 19:06:57.931344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.562 qpair failed and we were unable to recover it. 00:24:20.562 [2024-07-24 19:06:57.931464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.562 [2024-07-24 19:06:57.931489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.562 qpair failed and we were unable to recover it. 00:24:20.562 [2024-07-24 19:06:57.931690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.562 [2024-07-24 19:06:57.931718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.562 qpair failed and we were unable to recover it. 00:24:20.562 [2024-07-24 19:06:57.931875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.562 [2024-07-24 19:06:57.931902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.562 qpair failed and we were unable to recover it. 00:24:20.562 [2024-07-24 19:06:57.932066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.562 [2024-07-24 19:06:57.932091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.562 qpair failed and we were unable to recover it. 00:24:20.562 [2024-07-24 19:06:57.932291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.562 [2024-07-24 19:06:57.932318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.562 qpair failed and we were unable to recover it. 00:24:20.562 [2024-07-24 19:06:57.932485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.562 [2024-07-24 19:06:57.932511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.562 qpair failed and we were unable to recover it. 00:24:20.562 [2024-07-24 19:06:57.932661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.562 [2024-07-24 19:06:57.932686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.562 qpair failed and we were unable to recover it. 
00:24:20.562 [2024-07-24 19:06:57.932844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.562 [2024-07-24 19:06:57.932869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.562 qpair failed and we were unable to recover it. 00:24:20.562 [2024-07-24 19:06:57.933012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.562 [2024-07-24 19:06:57.933038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.562 qpair failed and we were unable to recover it. 00:24:20.562 [2024-07-24 19:06:57.933222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.562 [2024-07-24 19:06:57.933248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.562 qpair failed and we were unable to recover it. 00:24:20.562 [2024-07-24 19:06:57.933421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.562 [2024-07-24 19:06:57.933448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.562 qpair failed and we were unable to recover it. 00:24:20.562 [2024-07-24 19:06:57.933632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.562 [2024-07-24 19:06:57.933658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.562 qpair failed and we were unable to recover it. 00:24:20.562 [2024-07-24 19:06:57.933787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.562 [2024-07-24 19:06:57.933811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.562 qpair failed and we were unable to recover it. 00:24:20.562 [2024-07-24 19:06:57.933934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.562 [2024-07-24 19:06:57.933959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.562 qpair failed and we were unable to recover it. 00:24:20.562 [2024-07-24 19:06:57.934132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.562 [2024-07-24 19:06:57.934159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.562 qpair failed and we were unable to recover it. 00:24:20.562 [2024-07-24 19:06:57.934288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.562 [2024-07-24 19:06:57.934314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.562 qpair failed and we were unable to recover it. 00:24:20.562 [2024-07-24 19:06:57.934505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.562 [2024-07-24 19:06:57.934532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.562 qpair failed and we were unable to recover it. 
00:24:20.562 [2024-07-24 19:06:57.934664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.562 [2024-07-24 19:06:57.934691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.562 qpair failed and we were unable to recover it.
[... the same three-line error record (connect() failed, errno = 111 / sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats roughly 200 more times, timestamped 19:06:57.934851 through 19:06:57.974947, log prefixes 00:24:20.562-00:24:20.568 ...]
00:24:20.568 [2024-07-24 19:06:57.975099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.568 [2024-07-24 19:06:57.975159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.568 qpair failed and we were unable to recover it. 00:24:20.568 [2024-07-24 19:06:57.975337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.568 [2024-07-24 19:06:57.975362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.568 qpair failed and we were unable to recover it. 00:24:20.568 [2024-07-24 19:06:57.975490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.568 [2024-07-24 19:06:57.975532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.568 qpair failed and we were unable to recover it. 00:24:20.568 [2024-07-24 19:06:57.975703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.568 [2024-07-24 19:06:57.975738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.568 qpair failed and we were unable to recover it. 00:24:20.568 [2024-07-24 19:06:57.975913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.568 [2024-07-24 19:06:57.975943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.568 qpair failed and we were unable to recover it. 00:24:20.568 [2024-07-24 19:06:57.976109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.568 [2024-07-24 19:06:57.976136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.568 qpair failed and we were unable to recover it. 00:24:20.568 [2024-07-24 19:06:57.976331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.568 [2024-07-24 19:06:57.976359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.568 qpair failed and we were unable to recover it. 00:24:20.568 [2024-07-24 19:06:57.976524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.568 [2024-07-24 19:06:57.976550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.568 qpair failed and we were unable to recover it. 00:24:20.568 [2024-07-24 19:06:57.976717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.568 [2024-07-24 19:06:57.976745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.568 qpair failed and we were unable to recover it. 00:24:20.568 [2024-07-24 19:06:57.976897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.568 [2024-07-24 19:06:57.976922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.568 qpair failed and we were unable to recover it. 
00:24:20.568 [2024-07-24 19:06:57.977099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.568 [2024-07-24 19:06:57.977134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.568 qpair failed and we were unable to recover it. 00:24:20.568 [2024-07-24 19:06:57.977278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.568 [2024-07-24 19:06:57.977303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.568 qpair failed and we were unable to recover it. 00:24:20.568 [2024-07-24 19:06:57.977490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.568 [2024-07-24 19:06:57.977516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.568 qpair failed and we were unable to recover it. 00:24:20.568 [2024-07-24 19:06:57.977690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.568 [2024-07-24 19:06:57.977715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.568 qpair failed and we were unable to recover it. 00:24:20.568 [2024-07-24 19:06:57.977861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.568 [2024-07-24 19:06:57.977889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.568 qpair failed and we were unable to recover it. 00:24:20.568 [2024-07-24 19:06:57.978058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.568 [2024-07-24 19:06:57.978086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.568 qpair failed and we were unable to recover it. 00:24:20.568 [2024-07-24 19:06:57.978285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.568 [2024-07-24 19:06:57.978311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.568 qpair failed and we were unable to recover it. 00:24:20.568 [2024-07-24 19:06:57.978470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.568 [2024-07-24 19:06:57.978511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.568 qpair failed and we were unable to recover it. 00:24:20.568 [2024-07-24 19:06:57.978708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.568 [2024-07-24 19:06:57.978736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.568 qpair failed and we were unable to recover it. 00:24:20.568 [2024-07-24 19:06:57.978912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.978938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 
00:24:20.569 [2024-07-24 19:06:57.979063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.979089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 00:24:20.569 [2024-07-24 19:06:57.979236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.979261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 00:24:20.569 [2024-07-24 19:06:57.979408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.979433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 00:24:20.569 [2024-07-24 19:06:57.979637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.979664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 00:24:20.569 [2024-07-24 19:06:57.979800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.979828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 00:24:20.569 [2024-07-24 19:06:57.979977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.980004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 00:24:20.569 [2024-07-24 19:06:57.980137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.980170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 00:24:20.569 [2024-07-24 19:06:57.980316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.980340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 00:24:20.569 [2024-07-24 19:06:57.980512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.980537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 00:24:20.569 [2024-07-24 19:06:57.980710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.980735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 
00:24:20.569 [2024-07-24 19:06:57.980854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.980879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 00:24:20.569 [2024-07-24 19:06:57.981029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.981054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 00:24:20.569 [2024-07-24 19:06:57.981231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.981257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 00:24:20.569 [2024-07-24 19:06:57.981437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.981463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 00:24:20.569 [2024-07-24 19:06:57.981637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.981662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 00:24:20.569 [2024-07-24 19:06:57.981858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.981886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 00:24:20.569 [2024-07-24 19:06:57.982020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.982065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 00:24:20.569 [2024-07-24 19:06:57.982249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.982275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 00:24:20.569 [2024-07-24 19:06:57.982418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.982444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 00:24:20.569 [2024-07-24 19:06:57.982618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.982661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 
00:24:20.569 [2024-07-24 19:06:57.982836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.982861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 00:24:20.569 [2024-07-24 19:06:57.983054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.983097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 00:24:20.569 [2024-07-24 19:06:57.983285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.983313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 00:24:20.569 [2024-07-24 19:06:57.983524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.983549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 00:24:20.569 [2024-07-24 19:06:57.983678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.983707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 00:24:20.569 [2024-07-24 19:06:57.983853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.983878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 00:24:20.569 [2024-07-24 19:06:57.984039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.984064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 00:24:20.569 [2024-07-24 19:06:57.984268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.984296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 00:24:20.569 [2024-07-24 19:06:57.984488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.984514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 00:24:20.569 [2024-07-24 19:06:57.984667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.984692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 
00:24:20.569 [2024-07-24 19:06:57.984821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.984866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 00:24:20.569 [2024-07-24 19:06:57.985043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.985070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 00:24:20.569 [2024-07-24 19:06:57.985236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.985262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 00:24:20.569 [2024-07-24 19:06:57.985397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.985423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 00:24:20.569 [2024-07-24 19:06:57.985575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.985616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 00:24:20.569 [2024-07-24 19:06:57.985784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.985811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 00:24:20.569 [2024-07-24 19:06:57.985946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.569 [2024-07-24 19:06:57.985987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.569 qpair failed and we were unable to recover it. 00:24:20.569 [2024-07-24 19:06:57.986153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.986182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 00:24:20.570 [2024-07-24 19:06:57.986343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.986369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 00:24:20.570 [2024-07-24 19:06:57.986520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.986545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 
00:24:20.570 [2024-07-24 19:06:57.986751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.986777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 00:24:20.570 [2024-07-24 19:06:57.986902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.986928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 00:24:20.570 [2024-07-24 19:06:57.987079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.987128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 00:24:20.570 [2024-07-24 19:06:57.987314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.987338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 00:24:20.570 [2024-07-24 19:06:57.987458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.987484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 00:24:20.570 [2024-07-24 19:06:57.987607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.987647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 00:24:20.570 [2024-07-24 19:06:57.987830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.987855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 00:24:20.570 [2024-07-24 19:06:57.988009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.988037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 00:24:20.570 [2024-07-24 19:06:57.988219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.988245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 00:24:20.570 [2024-07-24 19:06:57.988375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.988402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 
00:24:20.570 [2024-07-24 19:06:57.988559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.988584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 00:24:20.570 [2024-07-24 19:06:57.988785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.988813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 00:24:20.570 [2024-07-24 19:06:57.988954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.988983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 00:24:20.570 [2024-07-24 19:06:57.989183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.989210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 00:24:20.570 [2024-07-24 19:06:57.989355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.989389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 00:24:20.570 [2024-07-24 19:06:57.989545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.989570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 00:24:20.570 [2024-07-24 19:06:57.989698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.989724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 00:24:20.570 [2024-07-24 19:06:57.989873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.989898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 00:24:20.570 [2024-07-24 19:06:57.990064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.990092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 00:24:20.570 [2024-07-24 19:06:57.990276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.990302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 
00:24:20.570 [2024-07-24 19:06:57.990442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.990469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 00:24:20.570 [2024-07-24 19:06:57.990654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.990681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 00:24:20.570 [2024-07-24 19:06:57.990856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.990881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 00:24:20.570 [2024-07-24 19:06:57.991049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.991077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 00:24:20.570 [2024-07-24 19:06:57.991243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.991273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 00:24:20.570 [2024-07-24 19:06:57.991399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.991425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 00:24:20.570 [2024-07-24 19:06:57.991608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.991635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 00:24:20.570 [2024-07-24 19:06:57.991809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.991835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 00:24:20.570 [2024-07-24 19:06:57.992010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.992036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 00:24:20.570 [2024-07-24 19:06:57.992195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.992222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 
00:24:20.570 [2024-07-24 19:06:57.992346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.992370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 00:24:20.570 [2024-07-24 19:06:57.992521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.992548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 00:24:20.570 [2024-07-24 19:06:57.992721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.992749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 00:24:20.570 [2024-07-24 19:06:57.992913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.992940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.570 qpair failed and we were unable to recover it. 00:24:20.570 [2024-07-24 19:06:57.993107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-07-24 19:06:57.993135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 00:24:20.571 [2024-07-24 19:06:57.993292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.993318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 00:24:20.571 [2024-07-24 19:06:57.993510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.993536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 00:24:20.571 [2024-07-24 19:06:57.993705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.993729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 00:24:20.571 [2024-07-24 19:06:57.993884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.993910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 00:24:20.571 [2024-07-24 19:06:57.994063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.994089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 
00:24:20.571 [2024-07-24 19:06:57.994214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.994238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 00:24:20.571 [2024-07-24 19:06:57.994370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.994396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 00:24:20.571 [2024-07-24 19:06:57.994536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.994562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 00:24:20.571 [2024-07-24 19:06:57.994752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.994777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 00:24:20.571 [2024-07-24 19:06:57.994930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.994972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 00:24:20.571 [2024-07-24 19:06:57.995156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.995182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 00:24:20.571 [2024-07-24 19:06:57.995331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.995357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 00:24:20.571 [2024-07-24 19:06:57.995510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.995534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 00:24:20.571 [2024-07-24 19:06:57.995661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.995686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 00:24:20.571 [2024-07-24 19:06:57.995901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.995926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 
00:24:20.571 [2024-07-24 19:06:57.996080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.996112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 00:24:20.571 [2024-07-24 19:06:57.996248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.996273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 00:24:20.571 [2024-07-24 19:06:57.996405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.996431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 00:24:20.571 [2024-07-24 19:06:57.996554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.996579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 00:24:20.571 [2024-07-24 19:06:57.996757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.996785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 00:24:20.571 [2024-07-24 19:06:57.996954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.996979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 00:24:20.571 [2024-07-24 19:06:57.997114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.997139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 00:24:20.571 [2024-07-24 19:06:57.997286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.997311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 00:24:20.571 [2024-07-24 19:06:57.997468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.997494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 00:24:20.571 [2024-07-24 19:06:57.997618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.997660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 
00:24:20.571 [2024-07-24 19:06:57.997839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.997864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 00:24:20.571 [2024-07-24 19:06:57.998026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.998054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 00:24:20.571 [2024-07-24 19:06:57.998234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.998269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 00:24:20.571 [2024-07-24 19:06:57.998420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.998444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 00:24:20.571 [2024-07-24 19:06:57.998624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.998653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 00:24:20.571 [2024-07-24 19:06:57.998823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.998851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 00:24:20.571 [2024-07-24 19:06:57.999031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.999058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 00:24:20.571 [2024-07-24 19:06:57.999220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.999247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 00:24:20.571 [2024-07-24 19:06:57.999396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.999420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 00:24:20.571 [2024-07-24 19:06:57.999602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-07-24 19:06:57.999626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.571 qpair failed and we were unable to recover it. 
00:24:20.571 [2024-07-24 19:06:57.999809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.571 [2024-07-24 19:06:57.999835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.571 qpair failed and we were unable to recover it.
00:24:20.577 (the three messages above repeat 210 times, unchanged except for timestamps [2024-07-24 19:06:57.999809] through [2024-07-24 19:06:58.039208], all with tqpair=0x7f0728000b90, addr=10.0.0.2, port=4420)
00:24:20.577 [2024-07-24 19:06:58.039363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.577 [2024-07-24 19:06:58.039389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.577 qpair failed and we were unable to recover it. 00:24:20.577 [2024-07-24 19:06:58.039570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.577 [2024-07-24 19:06:58.039595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.577 qpair failed and we were unable to recover it. 00:24:20.577 [2024-07-24 19:06:58.039744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.577 [2024-07-24 19:06:58.039769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.577 qpair failed and we were unable to recover it. 00:24:20.577 [2024-07-24 19:06:58.039926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.577 [2024-07-24 19:06:58.039952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.577 qpair failed and we were unable to recover it. 00:24:20.577 [2024-07-24 19:06:58.040124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.577 [2024-07-24 19:06:58.040160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.577 qpair failed and we were unable to recover it. 00:24:20.577 [2024-07-24 19:06:58.040334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.040360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.040514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.040539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.040666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.040691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.040843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.040868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.041002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.041027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 
00:24:20.578 [2024-07-24 19:06:58.041174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.041201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.041349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.041378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.041555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.041580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.041703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.041728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.041851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.041877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.042026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.042051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.042190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.042216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.042343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.042368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.042541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.042565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.042693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.042719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 
00:24:20.578 [2024-07-24 19:06:58.042843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.042868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.043043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.043068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.043218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.043245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.043394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.043420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.043552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.043576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.043710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.043736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.043893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.043918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.044073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.044097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.044241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.044267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.044389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.044414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 
00:24:20.578 [2024-07-24 19:06:58.044543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.044569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.044702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.044727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.044901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.044926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.045049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.045075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.045208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.045234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.045385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.045409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.045559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.045584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.045736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.045762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.045894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.045919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.046043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.046068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 
00:24:20.578 [2024-07-24 19:06:58.046246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.046272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.046427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.046454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.046610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.046635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.046785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.578 [2024-07-24 19:06:58.046809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.578 qpair failed and we were unable to recover it. 00:24:20.578 [2024-07-24 19:06:58.046962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.046987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.047143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.047169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.047298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.047323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.047496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.047521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.047648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.047674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.047820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.047844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 
00:24:20.579 [2024-07-24 19:06:58.047995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.048022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.048151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.048181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.048314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.048339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.048466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.048490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.048638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.048664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.048814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.048838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.049029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.049054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.049179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.049205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.049335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.049360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.049512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.049536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 
00:24:20.579 [2024-07-24 19:06:58.049709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.049734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.049858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.049882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.050028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.050052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.050206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.050233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.050360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.050385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.050563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.050588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.050734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.050759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.050894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.050920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.051046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.051071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.051233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.051260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 
00:24:20.579 [2024-07-24 19:06:58.051409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.051434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.051592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.051616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.051746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.051773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.051923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.051948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.052098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.052130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.052284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.052311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.052436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.052461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.052595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.052619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.052759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.052785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.052937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.052962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 
00:24:20.579 [2024-07-24 19:06:58.053118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.053144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.053302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.053328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.053462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.053488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.053646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.053670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.053842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.579 [2024-07-24 19:06:58.053868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.579 qpair failed and we were unable to recover it. 00:24:20.579 [2024-07-24 19:06:58.054007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.054033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.054195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.054221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.054379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.054404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.054579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.054605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.054752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.054778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 
00:24:20.580 [2024-07-24 19:06:58.054929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.054954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.055093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.055147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.055303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.055329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.055459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.055484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.055633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.055659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.055783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.055810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.055988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.056013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.056141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.056167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.056299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.056325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.056501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.056526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 
00:24:20.580 [2024-07-24 19:06:58.056657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.056682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.056804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.056830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.057003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.057028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.057171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.057197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.057372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.057397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.057526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.057552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.057684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.057709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.057858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.057883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.058036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.058061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.058186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.058211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 
00:24:20.580 [2024-07-24 19:06:58.058357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.058383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.058554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.058580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.058713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.058738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.058888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.058913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.059085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.059118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.059272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.059297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.059428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.059453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.059578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.059603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.059742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.059768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.059917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.059942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 
00:24:20.580 [2024-07-24 19:06:58.060090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.060123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.060294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.060319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.060472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.060496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.060647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.060671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.060824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.580 [2024-07-24 19:06:58.060849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.580 qpair failed and we were unable to recover it. 00:24:20.580 [2024-07-24 19:06:58.061000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.061025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 00:24:20.581 [2024-07-24 19:06:58.061173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.061198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 00:24:20.581 [2024-07-24 19:06:58.061327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.061352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 00:24:20.581 [2024-07-24 19:06:58.061475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.061502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 00:24:20.581 [2024-07-24 19:06:58.061638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.061663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 
00:24:20.581 [2024-07-24 19:06:58.061819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.061844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 00:24:20.581 [2024-07-24 19:06:58.062021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.062050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 00:24:20.581 [2024-07-24 19:06:58.062203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.062229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 00:24:20.581 [2024-07-24 19:06:58.062402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.062427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 00:24:20.581 [2024-07-24 19:06:58.062570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.062595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 00:24:20.581 [2024-07-24 19:06:58.062774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.062800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 00:24:20.581 [2024-07-24 19:06:58.062928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.062952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 00:24:20.581 [2024-07-24 19:06:58.063076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.063113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 00:24:20.581 [2024-07-24 19:06:58.063294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.063320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 00:24:20.581 [2024-07-24 19:06:58.063457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.063481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 
00:24:20.581 [2024-07-24 19:06:58.063632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.063656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 00:24:20.581 [2024-07-24 19:06:58.063811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.063837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 00:24:20.581 [2024-07-24 19:06:58.063966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.063992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 00:24:20.581 [2024-07-24 19:06:58.064149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.064175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 00:24:20.581 [2024-07-24 19:06:58.064315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.064346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 00:24:20.581 [2024-07-24 19:06:58.064510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.064536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 00:24:20.581 [2024-07-24 19:06:58.064693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.064718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 00:24:20.581 [2024-07-24 19:06:58.064842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.064868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 00:24:20.581 [2024-07-24 19:06:58.065023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.065048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 00:24:20.581 [2024-07-24 19:06:58.065229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.065255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 
00:24:20.581 [2024-07-24 19:06:58.065408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.065435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 00:24:20.581 [2024-07-24 19:06:58.065631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.065657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 00:24:20.581 [2024-07-24 19:06:58.065806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.065831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 00:24:20.581 [2024-07-24 19:06:58.065985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.066010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 00:24:20.581 [2024-07-24 19:06:58.066137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.066166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 00:24:20.581 [2024-07-24 19:06:58.066323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.066348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 00:24:20.581 [2024-07-24 19:06:58.066504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.066529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 00:24:20.581 [2024-07-24 19:06:58.066719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.066744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 00:24:20.581 [2024-07-24 19:06:58.066881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.066905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 00:24:20.581 [2024-07-24 19:06:58.067039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.581 [2024-07-24 19:06:58.067065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.581 qpair failed and we were unable to recover it. 
00:24:20.581 [2024-07-24 19:06:58.067237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.067263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.067396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.067421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.067549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.067576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.067725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.067750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.067896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.067921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.068051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.068077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.068245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.068272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.068422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.068448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.068623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.068648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.068822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.068847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 
00:24:20.582 [2024-07-24 19:06:58.069004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.069030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.069175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.069206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.069326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.069352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.069527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.069552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.069694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.069719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.069847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.069872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.070023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.070048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.070227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.070253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.070400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.070425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.070583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.070608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 
00:24:20.582 [2024-07-24 19:06:58.070763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.070789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.070915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.070941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.071093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.071126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.071255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.071280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.071434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.071459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.071622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.071648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.071798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.071823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.071963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.071989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.072142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.072168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.072326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.072352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 
00:24:20.582 [2024-07-24 19:06:58.072523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.072548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.072677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.072704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.072835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.072861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.073031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.073056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.073233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.073259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.073389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.073415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.073568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.073593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.073727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.073753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.073907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.073933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 00:24:20.582 [2024-07-24 19:06:58.074113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.074139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.582 qpair failed and we were unable to recover it. 
00:24:20.582 [2024-07-24 19:06:58.074258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.582 [2024-07-24 19:06:58.074284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.074413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.074439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.074580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.074605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.074759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.074785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.074910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.074935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.075114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.075141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.075291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.075316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.075504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.075529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.075681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.075708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.075868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.075894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 
00:24:20.583 [2024-07-24 19:06:58.076019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.076046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.076203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.076234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.076389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.076415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.076564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.076589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.076741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.076766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.076919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.076944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.077098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.077130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.077266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.077292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.077444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.077470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.077602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.077627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 
00:24:20.583 [2024-07-24 19:06:58.077753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.077778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.077910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.077935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.078088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.078122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.078253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.078280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.078438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.078464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.078598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.078623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.078759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.078784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.078907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.078932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.079076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.079109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.079266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.079292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 
00:24:20.583 [2024-07-24 19:06:58.079440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.079465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.079617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.079642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.079800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.079825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.079970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.079994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.080126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.080153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.080277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.080302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.080430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.080456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.080605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.080631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.080764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.080790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 00:24:20.583 [2024-07-24 19:06:58.080943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.583 [2024-07-24 19:06:58.080969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.583 qpair failed and we were unable to recover it. 
00:24:20.583 [2024-07-24 19:06:58.081126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.081152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 00:24:20.584 [2024-07-24 19:06:58.081334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.081359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 00:24:20.584 [2024-07-24 19:06:58.081491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.081516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 00:24:20.584 [2024-07-24 19:06:58.081647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.081672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 00:24:20.584 [2024-07-24 19:06:58.081794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.081820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 00:24:20.584 [2024-07-24 19:06:58.081940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.081966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 00:24:20.584 [2024-07-24 19:06:58.082121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.082147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 00:24:20.584 [2024-07-24 19:06:58.082302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.082327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 00:24:20.584 [2024-07-24 19:06:58.082482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.082507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 00:24:20.584 [2024-07-24 19:06:58.082634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.082659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 
00:24:20.584 [2024-07-24 19:06:58.082811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.082837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 00:24:20.584 [2024-07-24 19:06:58.082976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.083006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 00:24:20.584 [2024-07-24 19:06:58.083154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.083180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 00:24:20.584 [2024-07-24 19:06:58.083301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.083327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 00:24:20.584 [2024-07-24 19:06:58.083452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.083477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 00:24:20.584 [2024-07-24 19:06:58.083623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.083647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 00:24:20.584 [2024-07-24 19:06:58.083777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.083804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 00:24:20.584 [2024-07-24 19:06:58.083956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.083981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 00:24:20.584 [2024-07-24 19:06:58.084114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.084139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 00:24:20.584 [2024-07-24 19:06:58.084288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.084313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 
00:24:20.584 [2024-07-24 19:06:58.084461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.084486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 00:24:20.584 [2024-07-24 19:06:58.084617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.084642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 00:24:20.584 [2024-07-24 19:06:58.084769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.084797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 00:24:20.584 [2024-07-24 19:06:58.084927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.084952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 00:24:20.584 [2024-07-24 19:06:58.085075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.085100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 00:24:20.584 [2024-07-24 19:06:58.085292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.085317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 00:24:20.584 [2024-07-24 19:06:58.085437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.085464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 00:24:20.584 [2024-07-24 19:06:58.085603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.085629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 00:24:20.584 [2024-07-24 19:06:58.085756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.085780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 00:24:20.584 [2024-07-24 19:06:58.085907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.085932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 
00:24:20.584 [2024-07-24 19:06:58.086085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.086119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 00:24:20.584 [2024-07-24 19:06:58.086245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.086270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 00:24:20.584 [2024-07-24 19:06:58.086412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.086438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 00:24:20.584 [2024-07-24 19:06:58.086587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.086612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 00:24:20.584 [2024-07-24 19:06:58.086733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.086758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 00:24:20.584 [2024-07-24 19:06:58.086882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.086908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 00:24:20.584 [2024-07-24 19:06:58.087082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.087114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.584 qpair failed and we were unable to recover it. 00:24:20.584 [2024-07-24 19:06:58.087238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.584 [2024-07-24 19:06:58.087263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.585 qpair failed and we were unable to recover it. 00:24:20.585 [2024-07-24 19:06:58.087410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.585 [2024-07-24 19:06:58.087450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.585 qpair failed and we were unable to recover it. 00:24:20.585 [2024-07-24 19:06:58.087609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.585 [2024-07-24 19:06:58.087635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.585 qpair failed and we were unable to recover it. 
00:24:20.585 [2024-07-24 19:06:58.087756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.585 [2024-07-24 19:06:58.087782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.585 qpair failed and we were unable to recover it. 00:24:20.585 [2024-07-24 19:06:58.087911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.585 [2024-07-24 19:06:58.087936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.585 qpair failed and we were unable to recover it. 00:24:20.585 [2024-07-24 19:06:58.088090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.585 [2024-07-24 19:06:58.088126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.585 qpair failed and we were unable to recover it. 00:24:20.585 [2024-07-24 19:06:58.088253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.585 [2024-07-24 19:06:58.088278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.585 qpair failed and we were unable to recover it. 00:24:20.585 [2024-07-24 19:06:58.088403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.585 [2024-07-24 19:06:58.088428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.585 qpair failed and we were unable to recover it. 00:24:20.585 [2024-07-24 19:06:58.088551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.585 [2024-07-24 19:06:58.088576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.585 qpair failed and we were unable to recover it. 00:24:20.585 [2024-07-24 19:06:58.088708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.585 [2024-07-24 19:06:58.088733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.585 qpair failed and we were unable to recover it. 00:24:20.585 [2024-07-24 19:06:58.088849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.585 [2024-07-24 19:06:58.088874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.585 qpair failed and we were unable to recover it. 00:24:20.585 [2024-07-24 19:06:58.089022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.585 [2024-07-24 19:06:58.089047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.585 qpair failed and we were unable to recover it. 00:24:20.585 [2024-07-24 19:06:58.089181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.585 [2024-07-24 19:06:58.089207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.585 qpair failed and we were unable to recover it. 
00:24:20.585 [2024-07-24 19:06:58.089378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.585 [2024-07-24 19:06:58.089403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.585 qpair failed and we were unable to recover it. 00:24:20.585 [2024-07-24 19:06:58.089520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.585 [2024-07-24 19:06:58.089545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.585 qpair failed and we were unable to recover it. 00:24:20.585 [2024-07-24 19:06:58.089670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.585 [2024-07-24 19:06:58.089695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.585 qpair failed and we were unable to recover it. 00:24:20.585 [2024-07-24 19:06:58.089844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.585 [2024-07-24 19:06:58.089868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.585 qpair failed and we were unable to recover it. 00:24:20.585 [2024-07-24 19:06:58.090022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.585 [2024-07-24 19:06:58.090047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.585 qpair failed and we were unable to recover it. 00:24:20.585 [2024-07-24 19:06:58.090172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.585 [2024-07-24 19:06:58.090198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.585 qpair failed and we were unable to recover it. 00:24:20.585 [2024-07-24 19:06:58.090323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.585 [2024-07-24 19:06:58.090348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.585 qpair failed and we were unable to recover it. 00:24:20.585 [2024-07-24 19:06:58.090471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.585 [2024-07-24 19:06:58.090497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.585 qpair failed and we were unable to recover it. 00:24:20.585 [2024-07-24 19:06:58.090648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.585 [2024-07-24 19:06:58.090673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.585 qpair failed and we were unable to recover it. 00:24:20.585 [2024-07-24 19:06:58.090830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.585 [2024-07-24 19:06:58.090855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.585 qpair failed and we were unable to recover it. 
00:24:20.585 [2024-07-24 19:06:58.090985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.585 [2024-07-24 19:06:58.091010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.585 qpair failed and we were unable to recover it. 00:24:20.585 [2024-07-24 19:06:58.091142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.585 [2024-07-24 19:06:58.091168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.585 qpair failed and we were unable to recover it. 00:24:20.585 [2024-07-24 19:06:58.091294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.585 [2024-07-24 19:06:58.091320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.585 qpair failed and we were unable to recover it. 00:24:20.585 [2024-07-24 19:06:58.091438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.585 [2024-07-24 19:06:58.091463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.585 qpair failed and we were unable to recover it. 00:24:20.585 [2024-07-24 19:06:58.091615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.585 [2024-07-24 19:06:58.091640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.585 qpair failed and we were unable to recover it. 00:24:20.585 [2024-07-24 19:06:58.091775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.585 [2024-07-24 19:06:58.091806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.585 qpair failed and we were unable to recover it. 00:24:20.585 [2024-07-24 19:06:58.091928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.585 [2024-07-24 19:06:58.091953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.585 qpair failed and we were unable to recover it. 00:24:20.585 [2024-07-24 19:06:58.092100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.585 [2024-07-24 19:06:58.092131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.585 qpair failed and we were unable to recover it. 00:24:20.585 [2024-07-24 19:06:58.092254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.585 [2024-07-24 19:06:58.092278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.585 qpair failed and we were unable to recover it. 00:24:20.585 [2024-07-24 19:06:58.092410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.585 [2024-07-24 19:06:58.092435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.585 qpair failed and we were unable to recover it. 
00:24:20.588 [2024-07-24 19:06:58.107471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.588 [2024-07-24 19:06:58.107511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.588 qpair failed and we were unable to recover it.
[elided: the same three-line connect()/qpair error sequence repeats back-to-back from 19:06:58.090985 through 19:06:58.125707, alternating between tqpair=0x17c6250 and tqpair=0x7f0718000b90; only the timestamps differ]
00:24:20.875 [2024-07-24 19:06:58.125831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.875 [2024-07-24 19:06:58.125858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.875 qpair failed and we were unable to recover it. 00:24:20.875 [2024-07-24 19:06:58.125987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.875 [2024-07-24 19:06:58.126012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.875 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.126172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.126198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.126345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.126378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.126497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.126522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.126654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.126679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.126831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.126856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.127004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.127029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.127163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.127188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.127315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.127340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 
00:24:20.876 [2024-07-24 19:06:58.127469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.127494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.127622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.127647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.127807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.127833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.127987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.128012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.128182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.128209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.128335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.128360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.128491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.128517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.128673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.128699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.128821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.128847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.128977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.129002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 
00:24:20.876 [2024-07-24 19:06:58.129131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.129156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.129274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.129299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.129427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.129452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.129621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.129647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.129776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.129801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.129954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.129983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.130140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.130167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.130286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.130311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.130483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.130508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.130660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.130685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 
00:24:20.876 [2024-07-24 19:06:58.130801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.130826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.130951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.130976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.131124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.131161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.131299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.131325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.131456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.131481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.131628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.131653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.131776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.131801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.131949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.131974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.132127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.132154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 00:24:20.876 [2024-07-24 19:06:58.132285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.876 [2024-07-24 19:06:58.132311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.876 qpair failed and we were unable to recover it. 
00:24:20.877 [2024-07-24 19:06:58.132464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.132489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.877 [2024-07-24 19:06:58.132612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.132637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.877 [2024-07-24 19:06:58.132768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.132793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.877 [2024-07-24 19:06:58.132940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.132965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.877 [2024-07-24 19:06:58.133093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.133124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.877 [2024-07-24 19:06:58.133251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.133276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.877 [2024-07-24 19:06:58.133402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.133427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.877 [2024-07-24 19:06:58.133581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.133607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.877 [2024-07-24 19:06:58.133763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.133802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.877 [2024-07-24 19:06:58.133966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.133993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 
00:24:20.877 [2024-07-24 19:06:58.134124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.134149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.877 [2024-07-24 19:06:58.134272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.134299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.877 [2024-07-24 19:06:58.134449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.134474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.877 [2024-07-24 19:06:58.134636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.134661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.877 [2024-07-24 19:06:58.134811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.134837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.877 [2024-07-24 19:06:58.134967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.134993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.877 [2024-07-24 19:06:58.135123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.135149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.877 [2024-07-24 19:06:58.135302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.135326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.877 [2024-07-24 19:06:58.135461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.135487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.877 [2024-07-24 19:06:58.135620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.135652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 
00:24:20.877 [2024-07-24 19:06:58.135804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.135831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.877 [2024-07-24 19:06:58.135987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.136013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.877 [2024-07-24 19:06:58.136191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.136217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.877 [2024-07-24 19:06:58.136347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.136371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.877 [2024-07-24 19:06:58.136510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.136537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.877 [2024-07-24 19:06:58.136691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.136716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.877 [2024-07-24 19:06:58.136850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.136876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.877 [2024-07-24 19:06:58.137007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.137034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.877 [2024-07-24 19:06:58.137168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.137194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.877 [2024-07-24 19:06:58.137323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.137348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 
00:24:20.877 [2024-07-24 19:06:58.137508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.137535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.877 [2024-07-24 19:06:58.137690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.137715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.877 [2024-07-24 19:06:58.137843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.137870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.877 [2024-07-24 19:06:58.138000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.138025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.877 [2024-07-24 19:06:58.138154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.138179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.877 [2024-07-24 19:06:58.138305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.138331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.877 [2024-07-24 19:06:58.138485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.138510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.877 [2024-07-24 19:06:58.138663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.877 [2024-07-24 19:06:58.138688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.877 qpair failed and we were unable to recover it. 00:24:20.878 [2024-07-24 19:06:58.138843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.878 [2024-07-24 19:06:58.138869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.878 qpair failed and we were unable to recover it. 00:24:20.878 [2024-07-24 19:06:58.138986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.878 [2024-07-24 19:06:58.139011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.878 qpair failed and we were unable to recover it. 
00:24:20.878 [2024-07-24 19:06:58.139148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.878 [2024-07-24 19:06:58.139175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.878 qpair failed and we were unable to recover it. 00:24:20.878 [2024-07-24 19:06:58.139300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.878 [2024-07-24 19:06:58.139326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.878 qpair failed and we were unable to recover it. 00:24:20.878 [2024-07-24 19:06:58.139458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.878 [2024-07-24 19:06:58.139482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.878 qpair failed and we were unable to recover it. 00:24:20.878 [2024-07-24 19:06:58.139612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.878 [2024-07-24 19:06:58.139637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.878 qpair failed and we were unable to recover it. 00:24:20.878 [2024-07-24 19:06:58.139764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.878 [2024-07-24 19:06:58.139789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.878 qpair failed and we were unable to recover it. 00:24:20.878 [2024-07-24 19:06:58.139911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.878 [2024-07-24 19:06:58.139936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.878 qpair failed and we were unable to recover it. 00:24:20.878 [2024-07-24 19:06:58.140063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.878 [2024-07-24 19:06:58.140089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.878 qpair failed and we were unable to recover it. 00:24:20.878 [2024-07-24 19:06:58.140255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.878 [2024-07-24 19:06:58.140280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.878 qpair failed and we were unable to recover it. 00:24:20.878 [2024-07-24 19:06:58.140413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.878 [2024-07-24 19:06:58.140438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.878 qpair failed and we were unable to recover it. 00:24:20.878 [2024-07-24 19:06:58.140613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.878 [2024-07-24 19:06:58.140639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.878 qpair failed and we were unable to recover it. 
00:24:20.878 [2024-07-24 19:06:58.140758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.878 [2024-07-24 19:06:58.140783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.878 qpair failed and we were unable to recover it. 00:24:20.878 [2024-07-24 19:06:58.140932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.878 [2024-07-24 19:06:58.140957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.878 qpair failed and we were unable to recover it. 00:24:20.878 [2024-07-24 19:06:58.141091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.878 [2024-07-24 19:06:58.141124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.878 qpair failed and we were unable to recover it. 00:24:20.878 [2024-07-24 19:06:58.141273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.878 [2024-07-24 19:06:58.141312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.878 qpair failed and we were unable to recover it. 00:24:20.878 [2024-07-24 19:06:58.141442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.878 [2024-07-24 19:06:58.141469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.878 qpair failed and we were unable to recover it. 00:24:20.878 [2024-07-24 19:06:58.141596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.878 [2024-07-24 19:06:58.141621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.878 qpair failed and we were unable to recover it. 00:24:20.878 [2024-07-24 19:06:58.141751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.878 [2024-07-24 19:06:58.141777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.878 qpair failed and we were unable to recover it. 00:24:20.878 [2024-07-24 19:06:58.141951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.878 [2024-07-24 19:06:58.141977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.878 qpair failed and we were unable to recover it. 00:24:20.878 [2024-07-24 19:06:58.142122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.878 [2024-07-24 19:06:58.142147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.878 qpair failed and we were unable to recover it. 00:24:20.878 [2024-07-24 19:06:58.142272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.878 [2024-07-24 19:06:58.142299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.878 qpair failed and we were unable to recover it. 
00:24:20.878 [2024-07-24 19:06:58.142451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.878 [2024-07-24 19:06:58.142476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.878 qpair failed and we were unable to recover it. 00:24:20.878 [2024-07-24 19:06:58.142604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.878 [2024-07-24 19:06:58.142629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.878 qpair failed and we were unable to recover it. 00:24:20.878 [2024-07-24 19:06:58.142754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.878 [2024-07-24 19:06:58.142779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.878 qpair failed and we were unable to recover it. 00:24:20.878 [2024-07-24 19:06:58.142931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.878 [2024-07-24 19:06:58.142956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.878 qpair failed and we were unable to recover it. 00:24:20.878 [2024-07-24 19:06:58.143083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.878 [2024-07-24 19:06:58.143112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.878 qpair failed and we were unable to recover it. 00:24:20.878 [2024-07-24 19:06:58.143290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.878 [2024-07-24 19:06:58.143315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.878 qpair failed and we were unable to recover it. 00:24:20.878 [2024-07-24 19:06:58.143439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.878 [2024-07-24 19:06:58.143464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.878 qpair failed and we were unable to recover it. 00:24:20.878 [2024-07-24 19:06:58.143599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.878 [2024-07-24 19:06:58.143624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.878 qpair failed and we were unable to recover it. 00:24:20.878 [2024-07-24 19:06:58.143754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.879 [2024-07-24 19:06:58.143779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.879 qpair failed and we were unable to recover it. 00:24:20.879 [2024-07-24 19:06:58.143954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.879 [2024-07-24 19:06:58.143979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.879 qpair failed and we were unable to recover it. 
00:24:20.879 [2024-07-24 19:06:58.144111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.879 [2024-07-24 19:06:58.144159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.879 qpair failed and we were unable to recover it. 00:24:20.879 [2024-07-24 19:06:58.144285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.879 [2024-07-24 19:06:58.144311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.879 qpair failed and we were unable to recover it. 00:24:20.879 [2024-07-24 19:06:58.144470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.879 [2024-07-24 19:06:58.144495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.879 qpair failed and we were unable to recover it. 00:24:20.879 [2024-07-24 19:06:58.144644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.879 [2024-07-24 19:06:58.144669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.879 qpair failed and we were unable to recover it. 00:24:20.879 [2024-07-24 19:06:58.144813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.879 [2024-07-24 19:06:58.144838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.879 qpair failed and we were unable to recover it. 00:24:20.879 [2024-07-24 19:06:58.144966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.879 [2024-07-24 19:06:58.144991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.879 qpair failed and we were unable to recover it. 00:24:20.879 [2024-07-24 19:06:58.145166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.879 [2024-07-24 19:06:58.145192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.879 qpair failed and we were unable to recover it. 00:24:20.879 [2024-07-24 19:06:58.145348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.879 [2024-07-24 19:06:58.145373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.879 qpair failed and we were unable to recover it. 00:24:20.879 [2024-07-24 19:06:58.145503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.879 [2024-07-24 19:06:58.145528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.879 qpair failed and we were unable to recover it. 00:24:20.879 [2024-07-24 19:06:58.145656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.879 [2024-07-24 19:06:58.145683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.879 qpair failed and we were unable to recover it. 
00:24:20.879 [2024-07-24 19:06:58.145810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.879 [2024-07-24 19:06:58.145839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.879 qpair failed and we were unable to recover it. 00:24:20.879 [2024-07-24 19:06:58.145969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.879 [2024-07-24 19:06:58.145994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.879 qpair failed and we were unable to recover it. 00:24:20.879 [2024-07-24 19:06:58.146130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.879 [2024-07-24 19:06:58.146168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.879 qpair failed and we were unable to recover it. 00:24:20.879 [2024-07-24 19:06:58.146318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.879 [2024-07-24 19:06:58.146343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.879 qpair failed and we were unable to recover it. 00:24:20.879 [2024-07-24 19:06:58.146492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.879 [2024-07-24 19:06:58.146517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.879 qpair failed and we were unable to recover it. 00:24:20.879 [2024-07-24 19:06:58.146647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.879 [2024-07-24 19:06:58.146673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.879 qpair failed and we were unable to recover it. 00:24:20.879 [2024-07-24 19:06:58.146827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.879 [2024-07-24 19:06:58.146853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.879 qpair failed and we were unable to recover it. 00:24:20.879 [2024-07-24 19:06:58.147015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.879 [2024-07-24 19:06:58.147041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.879 qpair failed and we were unable to recover it. 00:24:20.879 [2024-07-24 19:06:58.147175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.879 [2024-07-24 19:06:58.147202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.879 qpair failed and we were unable to recover it. 00:24:20.879 [2024-07-24 19:06:58.147329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.879 [2024-07-24 19:06:58.147354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.879 qpair failed and we were unable to recover it. 
00:24:20.879 [2024-07-24 19:06:58.147480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.879 [2024-07-24 19:06:58.147506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.879 qpair failed and we were unable to recover it.
[... the same three-line sequence -- posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. -- repeats for every reconnect attempt from 19:06:58.147480 through 19:06:58.183515; only the timestamps change ...]
00:24:20.886 [2024-07-24 19:06:58.183488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.886 [2024-07-24 19:06:58.183515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.886 qpair failed and we were unable to recover it.
00:24:20.886 [2024-07-24 19:06:58.183682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.886 [2024-07-24 19:06:58.183707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.886 qpair failed and we were unable to recover it. 00:24:20.886 [2024-07-24 19:06:58.183842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.886 [2024-07-24 19:06:58.183869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.886 qpair failed and we were unable to recover it. 00:24:20.886 [2024-07-24 19:06:58.184006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.886 [2024-07-24 19:06:58.184032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.886 qpair failed and we were unable to recover it. 00:24:20.886 [2024-07-24 19:06:58.184213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.886 [2024-07-24 19:06:58.184239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.886 qpair failed and we were unable to recover it. 00:24:20.886 [2024-07-24 19:06:58.184359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.886 [2024-07-24 19:06:58.184396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.886 qpair failed and we were unable to recover it. 00:24:20.886 [2024-07-24 19:06:58.184551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.886 [2024-07-24 19:06:58.184576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.886 qpair failed and we were unable to recover it. 00:24:20.886 [2024-07-24 19:06:58.184746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.886 [2024-07-24 19:06:58.184771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.886 qpair failed and we were unable to recover it. 00:24:20.886 [2024-07-24 19:06:58.184927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.886 [2024-07-24 19:06:58.184952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.886 qpair failed and we were unable to recover it. 00:24:20.886 [2024-07-24 19:06:58.185131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.886 [2024-07-24 19:06:58.185167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.886 qpair failed and we were unable to recover it. 00:24:20.886 [2024-07-24 19:06:58.185325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.886 [2024-07-24 19:06:58.185351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.886 qpair failed and we were unable to recover it. 
00:24:20.886 [2024-07-24 19:06:58.185477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.886 [2024-07-24 19:06:58.185502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.886 qpair failed and we were unable to recover it. 00:24:20.886 [2024-07-24 19:06:58.185637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.886 [2024-07-24 19:06:58.185666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.886 qpair failed and we were unable to recover it. 00:24:20.886 [2024-07-24 19:06:58.185788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.886 [2024-07-24 19:06:58.185814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.886 qpair failed and we were unable to recover it. 00:24:20.886 [2024-07-24 19:06:58.185964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.886 [2024-07-24 19:06:58.185990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.886 qpair failed and we were unable to recover it. 00:24:20.886 [2024-07-24 19:06:58.186122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.886 [2024-07-24 19:06:58.186148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.886 qpair failed and we were unable to recover it. 00:24:20.886 [2024-07-24 19:06:58.186304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.886 [2024-07-24 19:06:58.186329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.886 qpair failed and we were unable to recover it. 00:24:20.886 [2024-07-24 19:06:58.186478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.886 [2024-07-24 19:06:58.186505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.886 qpair failed and we were unable to recover it. 00:24:20.886 [2024-07-24 19:06:58.186628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.886 [2024-07-24 19:06:58.186654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.886 qpair failed and we were unable to recover it. 00:24:20.886 [2024-07-24 19:06:58.186780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.886 [2024-07-24 19:06:58.186807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.886 qpair failed and we were unable to recover it. 00:24:20.886 [2024-07-24 19:06:58.186994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.886 [2024-07-24 19:06:58.187020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.886 qpair failed and we were unable to recover it. 
00:24:20.886 [2024-07-24 19:06:58.187148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.886 [2024-07-24 19:06:58.187179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.886 qpair failed and we were unable to recover it. 00:24:20.886 [2024-07-24 19:06:58.187317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.886 [2024-07-24 19:06:58.187342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.886 qpair failed and we were unable to recover it. 00:24:20.886 [2024-07-24 19:06:58.187499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.886 [2024-07-24 19:06:58.187525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.886 qpair failed and we were unable to recover it. 00:24:20.886 [2024-07-24 19:06:58.187704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.886 [2024-07-24 19:06:58.187730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.886 qpair failed and we were unable to recover it. 00:24:20.886 [2024-07-24 19:06:58.187882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.886 [2024-07-24 19:06:58.187908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.886 qpair failed and we were unable to recover it. 00:24:20.886 [2024-07-24 19:06:58.188037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.886 [2024-07-24 19:06:58.188063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.886 qpair failed and we were unable to recover it. 00:24:20.887 [2024-07-24 19:06:58.188196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.188223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 00:24:20.887 [2024-07-24 19:06:58.188374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.188400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 00:24:20.887 [2024-07-24 19:06:58.189136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.189166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 00:24:20.887 [2024-07-24 19:06:58.189333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.189361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 
00:24:20.887 [2024-07-24 19:06:58.189493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.189530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 00:24:20.887 [2024-07-24 19:06:58.189685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.189711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 00:24:20.887 [2024-07-24 19:06:58.189866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.189893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 00:24:20.887 [2024-07-24 19:06:58.190076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.190109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 00:24:20.887 [2024-07-24 19:06:58.190240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.190266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 00:24:20.887 [2024-07-24 19:06:58.190426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.190453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 00:24:20.887 [2024-07-24 19:06:58.190609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.190635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 00:24:20.887 [2024-07-24 19:06:58.190787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.190814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 00:24:20.887 [2024-07-24 19:06:58.190963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.190989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 00:24:20.887 [2024-07-24 19:06:58.191140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.191167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 
00:24:20.887 [2024-07-24 19:06:58.191304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.191330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 00:24:20.887 [2024-07-24 19:06:58.191458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.191484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 00:24:20.887 [2024-07-24 19:06:58.191605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.191632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 00:24:20.887 [2024-07-24 19:06:58.191791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.191816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 00:24:20.887 [2024-07-24 19:06:58.192161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.192190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 00:24:20.887 [2024-07-24 19:06:58.192327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.192353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 00:24:20.887 [2024-07-24 19:06:58.192496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.192521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 00:24:20.887 [2024-07-24 19:06:58.192705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.192731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 00:24:20.887 [2024-07-24 19:06:58.192884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.192912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 00:24:20.887 [2024-07-24 19:06:58.193079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.193113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 
00:24:20.887 [2024-07-24 19:06:58.193249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.193276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 00:24:20.887 [2024-07-24 19:06:58.193428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.193454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 00:24:20.887 [2024-07-24 19:06:58.193581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.193607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 00:24:20.887 [2024-07-24 19:06:58.193741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.193769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 00:24:20.887 [2024-07-24 19:06:58.193923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.193949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 00:24:20.887 [2024-07-24 19:06:58.194077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.194109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 00:24:20.887 [2024-07-24 19:06:58.194238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.194264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 00:24:20.887 [2024-07-24 19:06:58.194414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.194440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 00:24:20.887 [2024-07-24 19:06:58.194570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.194596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 00:24:20.887 [2024-07-24 19:06:58.194744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.194771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 
00:24:20.887 [2024-07-24 19:06:58.194949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.194980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 00:24:20.887 [2024-07-24 19:06:58.195111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.195139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 00:24:20.887 [2024-07-24 19:06:58.195267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.887 [2024-07-24 19:06:58.195292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.887 qpair failed and we were unable to recover it. 00:24:20.887 [2024-07-24 19:06:58.195420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.888 [2024-07-24 19:06:58.195446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.888 qpair failed and we were unable to recover it. 00:24:20.888 [2024-07-24 19:06:58.195595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.888 [2024-07-24 19:06:58.195620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.888 qpair failed and we were unable to recover it. 00:24:20.888 [2024-07-24 19:06:58.195766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.888 [2024-07-24 19:06:58.195792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.888 qpair failed and we were unable to recover it. 00:24:20.888 [2024-07-24 19:06:58.195948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.888 [2024-07-24 19:06:58.195973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.888 qpair failed and we were unable to recover it. 00:24:20.888 [2024-07-24 19:06:58.196096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.888 [2024-07-24 19:06:58.196128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.888 qpair failed and we were unable to recover it. 00:24:20.888 [2024-07-24 19:06:58.196280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.888 [2024-07-24 19:06:58.196305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.888 qpair failed and we were unable to recover it. 00:24:20.888 [2024-07-24 19:06:58.196435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.888 [2024-07-24 19:06:58.196461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.888 qpair failed and we were unable to recover it. 
00:24:20.888 [2024-07-24 19:06:58.196610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.888 [2024-07-24 19:06:58.196635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.888 qpair failed and we were unable to recover it. 00:24:20.888 [2024-07-24 19:06:58.196787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.888 [2024-07-24 19:06:58.196812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.888 qpair failed and we were unable to recover it. 00:24:20.888 [2024-07-24 19:06:58.196962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.888 [2024-07-24 19:06:58.196988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.888 qpair failed and we were unable to recover it. 00:24:20.888 [2024-07-24 19:06:58.197140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.888 [2024-07-24 19:06:58.197166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.888 qpair failed and we were unable to recover it. 00:24:20.888 [2024-07-24 19:06:58.197292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.888 [2024-07-24 19:06:58.197318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.888 qpair failed and we were unable to recover it. 00:24:20.888 [2024-07-24 19:06:58.197495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.888 [2024-07-24 19:06:58.197520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.888 qpair failed and we were unable to recover it. 00:24:20.888 [2024-07-24 19:06:58.197673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.888 [2024-07-24 19:06:58.197698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.888 qpair failed and we were unable to recover it. 00:24:20.888 [2024-07-24 19:06:58.197844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.888 [2024-07-24 19:06:58.197869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.888 qpair failed and we were unable to recover it. 00:24:20.888 [2024-07-24 19:06:58.198042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.888 [2024-07-24 19:06:58.198068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.888 qpair failed and we were unable to recover it. 00:24:20.888 [2024-07-24 19:06:58.198240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.888 [2024-07-24 19:06:58.198266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.888 qpair failed and we were unable to recover it. 
00:24:20.888 [2024-07-24 19:06:58.198400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.888 [2024-07-24 19:06:58.198426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.888 qpair failed and we were unable to recover it. 00:24:20.888 [2024-07-24 19:06:58.198577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.888 [2024-07-24 19:06:58.198604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.888 qpair failed and we were unable to recover it. 00:24:20.888 [2024-07-24 19:06:58.198737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.888 [2024-07-24 19:06:58.198763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.888 qpair failed and we were unable to recover it. 00:24:20.888 [2024-07-24 19:06:58.198914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.888 [2024-07-24 19:06:58.198940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.888 qpair failed and we were unable to recover it. 00:24:20.888 [2024-07-24 19:06:58.199090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.888 [2024-07-24 19:06:58.199128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.888 qpair failed and we were unable to recover it. 00:24:20.888 [2024-07-24 19:06:58.199258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.888 [2024-07-24 19:06:58.199284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.888 qpair failed and we were unable to recover it. 00:24:20.888 [2024-07-24 19:06:58.199426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.888 [2024-07-24 19:06:58.199452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.888 qpair failed and we were unable to recover it. 00:24:20.888 [2024-07-24 19:06:58.199633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.888 [2024-07-24 19:06:58.199658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.888 qpair failed and we were unable to recover it. 00:24:20.888 [2024-07-24 19:06:58.199813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.888 [2024-07-24 19:06:58.199839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.888 qpair failed and we were unable to recover it. 00:24:20.888 [2024-07-24 19:06:58.199970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.888 [2024-07-24 19:06:58.199995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.888 qpair failed and we were unable to recover it. 
00:24:20.888 [2024-07-24 19:06:58.200152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.888 [2024-07-24 19:06:58.200177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.888 qpair failed and we were unable to recover it. 00:24:20.888 [2024-07-24 19:06:58.200332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.888 [2024-07-24 19:06:58.200359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.888 qpair failed and we were unable to recover it. 00:24:20.888 [2024-07-24 19:06:58.200516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.888 [2024-07-24 19:06:58.200541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.888 qpair failed and we were unable to recover it. 00:24:20.888 [2024-07-24 19:06:58.200683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.888 [2024-07-24 19:06:58.200708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.888 qpair failed and we were unable to recover it. 00:24:20.888 [2024-07-24 19:06:58.200859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.888 [2024-07-24 19:06:58.200886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.888 qpair failed and we were unable to recover it. 00:24:20.889 [2024-07-24 19:06:58.201007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.201034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 00:24:20.889 [2024-07-24 19:06:58.201215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.201241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 00:24:20.889 [2024-07-24 19:06:58.201371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.201397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 00:24:20.889 [2024-07-24 19:06:58.201573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.201599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 00:24:20.889 [2024-07-24 19:06:58.201754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.201779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 
00:24:20.889 [2024-07-24 19:06:58.201935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.201964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 00:24:20.889 [2024-07-24 19:06:58.202121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.202148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 00:24:20.889 [2024-07-24 19:06:58.202279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.202305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 00:24:20.889 [2024-07-24 19:06:58.202456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.202481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 00:24:20.889 [2024-07-24 19:06:58.202661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.202686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 00:24:20.889 [2024-07-24 19:06:58.202832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.202858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 00:24:20.889 [2024-07-24 19:06:58.202986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.203012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 00:24:20.889 [2024-07-24 19:06:58.203149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.203175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 00:24:20.889 [2024-07-24 19:06:58.203305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.203330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 00:24:20.889 [2024-07-24 19:06:58.203482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.203509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 
00:24:20.889 [2024-07-24 19:06:58.203637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.203662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 00:24:20.889 [2024-07-24 19:06:58.203811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.203837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 00:24:20.889 [2024-07-24 19:06:58.203989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.204014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 00:24:20.889 [2024-07-24 19:06:58.204167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.204193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 00:24:20.889 [2024-07-24 19:06:58.204324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.204349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 00:24:20.889 [2024-07-24 19:06:58.204498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.204524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 00:24:20.889 [2024-07-24 19:06:58.204673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.204698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 00:24:20.889 [2024-07-24 19:06:58.204829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.204854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 00:24:20.889 [2024-07-24 19:06:58.205009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.205035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 00:24:20.889 [2024-07-24 19:06:58.205197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.205224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 
00:24:20.889 [2024-07-24 19:06:58.205379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.205405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 00:24:20.889 [2024-07-24 19:06:58.205531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.205557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 00:24:20.889 [2024-07-24 19:06:58.205685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.205710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 00:24:20.889 [2024-07-24 19:06:58.205863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.205888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 00:24:20.889 [2024-07-24 19:06:58.206011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.206037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 00:24:20.889 [2024-07-24 19:06:58.206182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.206209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 00:24:20.889 [2024-07-24 19:06:58.206338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.206363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 00:24:20.889 [2024-07-24 19:06:58.206497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.206522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 00:24:20.889 [2024-07-24 19:06:58.206676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.206703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 00:24:20.889 [2024-07-24 19:06:58.206851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.889 [2024-07-24 19:06:58.206877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.889 qpair failed and we were unable to recover it. 
00:24:20.889 [2024-07-24 19:06:58.207025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.889 [2024-07-24 19:06:58.207050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.889 qpair failed and we were unable to recover it.
[... the identical connect()/qpair-failed triplet repeats for every reconnect attempt from 19:06:58.207025 through 19:06:58.243117 (log timestamps 00:24:20.889 to 00:24:20.895): each attempt on tqpair=0x7f0728000b90 against addr=10.0.0.2, port=4420 fails with errno = 111 and no qpair recovers ...]
00:24:20.895 [2024-07-24 19:06:58.243087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.895 [2024-07-24 19:06:58.243117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.895 qpair failed and we were unable to recover it.
00:24:20.895 [2024-07-24 19:06:58.243245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.895 [2024-07-24 19:06:58.243275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.895 qpair failed and we were unable to recover it. 00:24:20.895 [2024-07-24 19:06:58.243426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.895 [2024-07-24 19:06:58.243451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.895 qpair failed and we were unable to recover it. 00:24:20.895 [2024-07-24 19:06:58.243576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.895 [2024-07-24 19:06:58.243601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.895 qpair failed and we were unable to recover it. 00:24:20.895 [2024-07-24 19:06:58.243776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.895 [2024-07-24 19:06:58.243801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.895 qpair failed and we were unable to recover it. 00:24:20.895 [2024-07-24 19:06:58.243954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.895 [2024-07-24 19:06:58.243979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.895 qpair failed and we were unable to recover it. 00:24:20.895 [2024-07-24 19:06:58.244127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.895 [2024-07-24 19:06:58.244153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.895 qpair failed and we were unable to recover it. 00:24:20.895 [2024-07-24 19:06:58.244313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.895 [2024-07-24 19:06:58.244339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.895 qpair failed and we were unable to recover it. 00:24:20.895 [2024-07-24 19:06:58.244485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.895 [2024-07-24 19:06:58.244511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.895 qpair failed and we were unable to recover it. 00:24:20.895 [2024-07-24 19:06:58.244655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.895 [2024-07-24 19:06:58.244680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.895 qpair failed and we were unable to recover it. 00:24:20.895 [2024-07-24 19:06:58.244834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.895 [2024-07-24 19:06:58.244860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.895 qpair failed and we were unable to recover it. 
00:24:20.895 [2024-07-24 19:06:58.245009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.895 [2024-07-24 19:06:58.245035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.895 qpair failed and we were unable to recover it. 00:24:20.895 [2024-07-24 19:06:58.245185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.896 [2024-07-24 19:06:58.245212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.896 qpair failed and we were unable to recover it. 00:24:20.896 [2024-07-24 19:06:58.245363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.896 [2024-07-24 19:06:58.245388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.896 qpair failed and we were unable to recover it. 00:24:20.896 [2024-07-24 19:06:58.245515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.896 [2024-07-24 19:06:58.245541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.896 qpair failed and we were unable to recover it. 00:24:20.896 [2024-07-24 19:06:58.245720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.896 [2024-07-24 19:06:58.245745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.896 qpair failed and we were unable to recover it. 00:24:20.896 [2024-07-24 19:06:58.245900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.896 [2024-07-24 19:06:58.245925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.896 qpair failed and we were unable to recover it. 00:24:20.896 [2024-07-24 19:06:58.246059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.896 [2024-07-24 19:06:58.246084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.896 qpair failed and we were unable to recover it. 00:24:20.896 [2024-07-24 19:06:58.246214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.896 [2024-07-24 19:06:58.246240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.896 qpair failed and we were unable to recover it. 00:24:20.896 [2024-07-24 19:06:58.246390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.896 [2024-07-24 19:06:58.246415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.896 qpair failed and we were unable to recover it. 00:24:20.896 [2024-07-24 19:06:58.246567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.896 [2024-07-24 19:06:58.246592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.896 qpair failed and we were unable to recover it. 
00:24:20.896 [2024-07-24 19:06:58.246750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.896 [2024-07-24 19:06:58.246776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.896 qpair failed and we were unable to recover it. 00:24:20.896 [2024-07-24 19:06:58.246910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.896 [2024-07-24 19:06:58.246936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.896 qpair failed and we were unable to recover it. 00:24:20.896 [2024-07-24 19:06:58.247087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.896 [2024-07-24 19:06:58.247117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.896 qpair failed and we were unable to recover it. 00:24:20.896 [2024-07-24 19:06:58.247272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.896 [2024-07-24 19:06:58.247298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.896 qpair failed and we were unable to recover it. 00:24:20.896 [2024-07-24 19:06:58.247425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.896 [2024-07-24 19:06:58.247450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.896 qpair failed and we were unable to recover it. 00:24:20.896 [2024-07-24 19:06:58.247616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.896 [2024-07-24 19:06:58.247641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.896 qpair failed and we were unable to recover it. 00:24:20.896 [2024-07-24 19:06:58.247774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.896 [2024-07-24 19:06:58.247800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.896 qpair failed and we were unable to recover it. 00:24:20.896 [2024-07-24 19:06:58.247957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.896 [2024-07-24 19:06:58.247983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.896 qpair failed and we were unable to recover it. 00:24:20.896 [2024-07-24 19:06:58.248115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.896 [2024-07-24 19:06:58.248151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.896 qpair failed and we were unable to recover it. 00:24:20.896 [2024-07-24 19:06:58.248289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.896 [2024-07-24 19:06:58.248314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.896 qpair failed and we were unable to recover it. 
00:24:20.896 [2024-07-24 19:06:58.248444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.896 [2024-07-24 19:06:58.248470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.896 qpair failed and we were unable to recover it. 00:24:20.896 [2024-07-24 19:06:58.248604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.896 [2024-07-24 19:06:58.248630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.896 qpair failed and we were unable to recover it. 00:24:20.896 [2024-07-24 19:06:58.248791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.896 [2024-07-24 19:06:58.248817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.896 qpair failed and we were unable to recover it. 00:24:20.896 [2024-07-24 19:06:58.248991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.896 [2024-07-24 19:06:58.249016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.896 qpair failed and we were unable to recover it. 00:24:20.896 [2024-07-24 19:06:58.249170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.896 [2024-07-24 19:06:58.249196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.896 qpair failed and we were unable to recover it. 00:24:20.896 [2024-07-24 19:06:58.249325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.896 [2024-07-24 19:06:58.249350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.896 qpair failed and we were unable to recover it. 00:24:20.896 [2024-07-24 19:06:58.249469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.896 [2024-07-24 19:06:58.249494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.896 qpair failed and we were unable to recover it. 00:24:20.896 [2024-07-24 19:06:58.249615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.896 [2024-07-24 19:06:58.249641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.896 qpair failed and we were unable to recover it. 00:24:20.896 [2024-07-24 19:06:58.249813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.896 [2024-07-24 19:06:58.249838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.896 qpair failed and we were unable to recover it. 00:24:20.896 [2024-07-24 19:06:58.249983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.896 [2024-07-24 19:06:58.250009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.896 qpair failed and we were unable to recover it. 
00:24:20.896 [2024-07-24 19:06:58.250137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.896 [2024-07-24 19:06:58.250167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.896 qpair failed and we were unable to recover it. 00:24:20.896 [2024-07-24 19:06:58.250340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.896 [2024-07-24 19:06:58.250365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.896 qpair failed and we were unable to recover it. 00:24:20.896 [2024-07-24 19:06:58.250515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.896 [2024-07-24 19:06:58.250540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.896 qpair failed and we were unable to recover it. 00:24:20.896 [2024-07-24 19:06:58.250716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.896 [2024-07-24 19:06:58.250742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.896 qpair failed and we were unable to recover it. 00:24:20.896 [2024-07-24 19:06:58.250894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.896 [2024-07-24 19:06:58.250920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.896 qpair failed and we were unable to recover it. 00:24:20.897 [2024-07-24 19:06:58.251081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.897 [2024-07-24 19:06:58.251110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.897 qpair failed and we were unable to recover it. 00:24:20.897 [2024-07-24 19:06:58.251293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.897 [2024-07-24 19:06:58.251318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.897 qpair failed and we were unable to recover it. 00:24:20.897 [2024-07-24 19:06:58.251444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.897 [2024-07-24 19:06:58.251470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.897 qpair failed and we were unable to recover it. 00:24:20.897 [2024-07-24 19:06:58.251622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.897 [2024-07-24 19:06:58.251648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.897 qpair failed and we were unable to recover it. 00:24:20.897 [2024-07-24 19:06:58.251774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.897 [2024-07-24 19:06:58.251799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.897 qpair failed and we were unable to recover it. 
00:24:20.897 [2024-07-24 19:06:58.251948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.897 [2024-07-24 19:06:58.251974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.897 qpair failed and we were unable to recover it. 00:24:20.897 [2024-07-24 19:06:58.252152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.897 [2024-07-24 19:06:58.252177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.897 qpair failed and we were unable to recover it. 00:24:20.897 [2024-07-24 19:06:58.252331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.897 [2024-07-24 19:06:58.252357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.897 qpair failed and we were unable to recover it. 00:24:20.897 [2024-07-24 19:06:58.252490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.897 [2024-07-24 19:06:58.252515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.897 qpair failed and we were unable to recover it. 00:24:20.897 [2024-07-24 19:06:58.252670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.897 [2024-07-24 19:06:58.252695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.897 qpair failed and we were unable to recover it. 00:24:20.897 [2024-07-24 19:06:58.252842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.897 [2024-07-24 19:06:58.252867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.897 qpair failed and we were unable to recover it. 00:24:20.897 [2024-07-24 19:06:58.253000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.897 [2024-07-24 19:06:58.253025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.897 qpair failed and we were unable to recover it. 00:24:20.897 [2024-07-24 19:06:58.253179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.897 [2024-07-24 19:06:58.253205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.897 qpair failed and we were unable to recover it. 00:24:20.897 [2024-07-24 19:06:58.253337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.897 [2024-07-24 19:06:58.253362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.897 qpair failed and we were unable to recover it. 00:24:20.897 [2024-07-24 19:06:58.253552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.897 [2024-07-24 19:06:58.253578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.897 qpair failed and we were unable to recover it. 
00:24:20.897 [2024-07-24 19:06:58.253731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.897 [2024-07-24 19:06:58.253756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.897 qpair failed and we were unable to recover it. 00:24:20.897 [2024-07-24 19:06:58.253887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.897 [2024-07-24 19:06:58.253912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.897 qpair failed and we were unable to recover it. 00:24:20.897 [2024-07-24 19:06:58.254070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.897 [2024-07-24 19:06:58.254095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.897 qpair failed and we were unable to recover it. 00:24:20.897 [2024-07-24 19:06:58.254289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.897 [2024-07-24 19:06:58.254315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.897 qpair failed and we were unable to recover it. 00:24:20.897 [2024-07-24 19:06:58.254440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.897 [2024-07-24 19:06:58.254466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.897 qpair failed and we were unable to recover it. 00:24:20.897 [2024-07-24 19:06:58.254589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.897 [2024-07-24 19:06:58.254614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.897 qpair failed and we were unable to recover it. 00:24:20.897 [2024-07-24 19:06:58.254764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.897 [2024-07-24 19:06:58.254789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.897 qpair failed and we were unable to recover it. 00:24:20.897 [2024-07-24 19:06:58.254914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.897 [2024-07-24 19:06:58.254943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.897 qpair failed and we were unable to recover it. 00:24:20.897 [2024-07-24 19:06:58.255097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.897 [2024-07-24 19:06:58.255128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.897 qpair failed and we were unable to recover it. 00:24:20.897 [2024-07-24 19:06:58.255253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.897 [2024-07-24 19:06:58.255286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.897 qpair failed and we were unable to recover it. 
00:24:20.897 [2024-07-24 19:06:58.255446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.897 [2024-07-24 19:06:58.255472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.897 qpair failed and we were unable to recover it. 00:24:20.897 [2024-07-24 19:06:58.255591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.897 [2024-07-24 19:06:58.255617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.897 qpair failed and we were unable to recover it. 00:24:20.897 [2024-07-24 19:06:58.255774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.897 [2024-07-24 19:06:58.255800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.897 qpair failed and we were unable to recover it. 00:24:20.897 [2024-07-24 19:06:58.255945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.897 [2024-07-24 19:06:58.255972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.897 qpair failed and we were unable to recover it. 00:24:20.897 [2024-07-24 19:06:58.256100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.897 [2024-07-24 19:06:58.256131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.897 qpair failed and we were unable to recover it. 00:24:20.897 [2024-07-24 19:06:58.256257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.897 [2024-07-24 19:06:58.256282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.897 qpair failed and we were unable to recover it. 00:24:20.898 [2024-07-24 19:06:58.256437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.898 [2024-07-24 19:06:58.256464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.898 qpair failed and we were unable to recover it. 00:24:20.898 [2024-07-24 19:06:58.256586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.898 [2024-07-24 19:06:58.256611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.898 qpair failed and we were unable to recover it. 00:24:20.898 [2024-07-24 19:06:58.256768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.898 [2024-07-24 19:06:58.256794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.898 qpair failed and we were unable to recover it. 00:24:20.898 [2024-07-24 19:06:58.256947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.898 [2024-07-24 19:06:58.256973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.898 qpair failed and we were unable to recover it. 
00:24:20.898 [2024-07-24 19:06:58.257125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.898 [2024-07-24 19:06:58.257151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.898 qpair failed and we were unable to recover it. 00:24:20.898 [2024-07-24 19:06:58.257304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.898 [2024-07-24 19:06:58.257329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.898 qpair failed and we were unable to recover it. 00:24:20.898 [2024-07-24 19:06:58.257458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.898 [2024-07-24 19:06:58.257484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.898 qpair failed and we were unable to recover it. 00:24:20.898 [2024-07-24 19:06:58.257664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.898 [2024-07-24 19:06:58.257690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.898 qpair failed and we were unable to recover it. 00:24:20.898 [2024-07-24 19:06:58.257837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.898 [2024-07-24 19:06:58.257864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.898 qpair failed and we were unable to recover it. 00:24:20.898 [2024-07-24 19:06:58.258023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.898 [2024-07-24 19:06:58.258048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.898 qpair failed and we were unable to recover it. 00:24:20.898 [2024-07-24 19:06:58.258202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.898 [2024-07-24 19:06:58.258228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.898 qpair failed and we were unable to recover it. 00:24:20.898 [2024-07-24 19:06:58.258383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.898 [2024-07-24 19:06:58.258409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.898 qpair failed and we were unable to recover it. 00:24:20.898 [2024-07-24 19:06:58.258539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.898 [2024-07-24 19:06:58.258564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.898 qpair failed and we were unable to recover it. 00:24:20.898 [2024-07-24 19:06:58.258689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.898 [2024-07-24 19:06:58.258715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.898 qpair failed and we were unable to recover it. 
00:24:20.898 [2024-07-24 19:06:58.258848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.898 [2024-07-24 19:06:58.258874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.898 qpair failed and we were unable to recover it. 00:24:20.898 [2024-07-24 19:06:58.259018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.898 [2024-07-24 19:06:58.259044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.898 qpair failed and we were unable to recover it. 00:24:20.898 [2024-07-24 19:06:58.259175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.898 [2024-07-24 19:06:58.259201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.898 qpair failed and we were unable to recover it. 00:24:20.898 [2024-07-24 19:06:58.259353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.898 [2024-07-24 19:06:58.259378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.898 qpair failed and we were unable to recover it. 00:24:20.898 [2024-07-24 19:06:58.259560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.898 [2024-07-24 19:06:58.259587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.898 qpair failed and we were unable to recover it. 00:24:20.898 [2024-07-24 19:06:58.259718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.898 [2024-07-24 19:06:58.259743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.898 qpair failed and we were unable to recover it. 00:24:20.898 [2024-07-24 19:06:58.259900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.898 [2024-07-24 19:06:58.259926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.898 qpair failed and we were unable to recover it. 00:24:20.898 [2024-07-24 19:06:58.260053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.898 [2024-07-24 19:06:58.260078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.898 qpair failed and we were unable to recover it. 00:24:20.898 [2024-07-24 19:06:58.260205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.898 [2024-07-24 19:06:58.260229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.898 qpair failed and we were unable to recover it. 00:24:20.898 [2024-07-24 19:06:58.260404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.898 [2024-07-24 19:06:58.260429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.898 qpair failed and we were unable to recover it. 
00:24:20.898 [2024-07-24 19:06:58.260548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.898 [2024-07-24 19:06:58.260573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.898 qpair failed and we were unable to recover it. 00:24:20.898 [2024-07-24 19:06:58.260699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.898 [2024-07-24 19:06:58.260726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.898 qpair failed and we were unable to recover it. 00:24:20.898 [2024-07-24 19:06:58.260879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.898 [2024-07-24 19:06:58.260904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.898 qpair failed and we were unable to recover it. 00:24:20.898 [2024-07-24 19:06:58.261063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.898 [2024-07-24 19:06:58.261089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.898 qpair failed and we were unable to recover it. 00:24:20.898 [2024-07-24 19:06:58.261228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.898 [2024-07-24 19:06:58.261255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.898 qpair failed and we were unable to recover it. 00:24:20.898 [2024-07-24 19:06:58.261386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.261411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 00:24:20.899 [2024-07-24 19:06:58.261585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.261610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 00:24:20.899 [2024-07-24 19:06:58.261737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.261766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 00:24:20.899 [2024-07-24 19:06:58.261887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.261912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 00:24:20.899 [2024-07-24 19:06:58.262067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.262093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 
00:24:20.899 [2024-07-24 19:06:58.262260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.262286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 00:24:20.899 [2024-07-24 19:06:58.262434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.262460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 00:24:20.899 [2024-07-24 19:06:58.262614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.262640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 00:24:20.899 [2024-07-24 19:06:58.262794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.262819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 00:24:20.899 [2024-07-24 19:06:58.262944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.262972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 00:24:20.899 [2024-07-24 19:06:58.263125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.263151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 00:24:20.899 [2024-07-24 19:06:58.263275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.263301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 00:24:20.899 [2024-07-24 19:06:58.263459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.263485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 00:24:20.899 [2024-07-24 19:06:58.263657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.263682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 00:24:20.899 [2024-07-24 19:06:58.263806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.263832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 
00:24:20.899 [2024-07-24 19:06:58.263986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.264011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 00:24:20.899 [2024-07-24 19:06:58.264152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.264179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 00:24:20.899 [2024-07-24 19:06:58.264336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.264363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 00:24:20.899 [2024-07-24 19:06:58.264543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.264568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 00:24:20.899 [2024-07-24 19:06:58.264722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.264748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 00:24:20.899 [2024-07-24 19:06:58.264902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.264928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 00:24:20.899 [2024-07-24 19:06:58.265078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.265110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 00:24:20.899 [2024-07-24 19:06:58.265241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.265267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 00:24:20.899 [2024-07-24 19:06:58.265424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.265449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 00:24:20.899 [2024-07-24 19:06:58.265579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.265605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 
00:24:20.899 [2024-07-24 19:06:58.265760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.265787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 00:24:20.899 [2024-07-24 19:06:58.265954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.265979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 00:24:20.899 [2024-07-24 19:06:58.266162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.266188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 00:24:20.899 [2024-07-24 19:06:58.266322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.266349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 00:24:20.899 [2024-07-24 19:06:58.266542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.266567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 00:24:20.899 [2024-07-24 19:06:58.266719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.266746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 00:24:20.899 [2024-07-24 19:06:58.266916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.266942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 00:24:20.899 [2024-07-24 19:06:58.267096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.267127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 00:24:20.899 [2024-07-24 19:06:58.267258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.267283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 00:24:20.899 [2024-07-24 19:06:58.267416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.899 [2024-07-24 19:06:58.267441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.899 qpair failed and we were unable to recover it. 
[The same three-message failure sequence — posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats unchanged for every subsequent connection attempt from 19:06:58.267591 through 19:06:58.302979.]
00:24:20.907 [2024-07-24 19:06:58.303130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.907 [2024-07-24 19:06:58.303157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.907 qpair failed and we were unable to recover it. 00:24:20.907 [2024-07-24 19:06:58.303314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.907 [2024-07-24 19:06:58.303339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.907 qpair failed and we were unable to recover it. 00:24:20.907 [2024-07-24 19:06:58.303524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.907 [2024-07-24 19:06:58.303549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.907 qpair failed and we were unable to recover it. 00:24:20.907 [2024-07-24 19:06:58.303701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.907 [2024-07-24 19:06:58.303727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.907 qpair failed and we were unable to recover it. 00:24:20.907 [2024-07-24 19:06:58.303905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.907 [2024-07-24 19:06:58.303930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.907 qpair failed and we were unable to recover it. 00:24:20.907 [2024-07-24 19:06:58.304056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.907 [2024-07-24 19:06:58.304083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.907 qpair failed and we were unable to recover it. 00:24:20.907 [2024-07-24 19:06:58.304250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.907 [2024-07-24 19:06:58.304276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.907 qpair failed and we were unable to recover it. 00:24:20.907 [2024-07-24 19:06:58.304398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.907 [2024-07-24 19:06:58.304428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.907 qpair failed and we were unable to recover it. 00:24:20.907 [2024-07-24 19:06:58.304572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.907 [2024-07-24 19:06:58.304598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.907 qpair failed and we were unable to recover it. 00:24:20.907 [2024-07-24 19:06:58.304755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.907 [2024-07-24 19:06:58.304780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.907 qpair failed and we were unable to recover it. 
00:24:20.907 [2024-07-24 19:06:58.304958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.907 [2024-07-24 19:06:58.304984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.907 qpair failed and we were unable to recover it. 00:24:20.907 [2024-07-24 19:06:58.305136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.907 [2024-07-24 19:06:58.305163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.907 qpair failed and we were unable to recover it. 00:24:20.907 [2024-07-24 19:06:58.305302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.907 [2024-07-24 19:06:58.305329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.907 qpair failed and we were unable to recover it. 00:24:20.907 [2024-07-24 19:06:58.305479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.907 [2024-07-24 19:06:58.305506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.907 qpair failed and we were unable to recover it. 00:24:20.907 [2024-07-24 19:06:58.305640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.907 [2024-07-24 19:06:58.305665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.907 qpair failed and we were unable to recover it. 00:24:20.907 [2024-07-24 19:06:58.305838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.907 [2024-07-24 19:06:58.305863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.907 qpair failed and we were unable to recover it. 00:24:20.907 [2024-07-24 19:06:58.306010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.907 [2024-07-24 19:06:58.306035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.907 qpair failed and we were unable to recover it. 00:24:20.907 [2024-07-24 19:06:58.306164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.907 [2024-07-24 19:06:58.306190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.907 qpair failed and we were unable to recover it. 00:24:20.907 [2024-07-24 19:06:58.306312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.907 [2024-07-24 19:06:58.306338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.907 qpair failed and we were unable to recover it. 00:24:20.907 [2024-07-24 19:06:58.306462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.907 [2024-07-24 19:06:58.306487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.907 qpair failed and we were unable to recover it. 
00:24:20.908 [2024-07-24 19:06:58.306637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.908 [2024-07-24 19:06:58.306663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.908 qpair failed and we were unable to recover it. 00:24:20.908 [2024-07-24 19:06:58.306822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.908 [2024-07-24 19:06:58.306848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.908 qpair failed and we were unable to recover it. 00:24:20.908 [2024-07-24 19:06:58.306972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.908 [2024-07-24 19:06:58.306999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.908 qpair failed and we were unable to recover it. 00:24:20.908 [2024-07-24 19:06:58.307157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.908 [2024-07-24 19:06:58.307183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.908 qpair failed and we were unable to recover it. 00:24:20.908 [2024-07-24 19:06:58.307335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.908 [2024-07-24 19:06:58.307360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.908 qpair failed and we were unable to recover it. 00:24:20.908 [2024-07-24 19:06:58.307509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.908 [2024-07-24 19:06:58.307535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.908 qpair failed and we were unable to recover it. 00:24:20.908 [2024-07-24 19:06:58.307766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.908 [2024-07-24 19:06:58.307791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.908 qpair failed and we were unable to recover it. 00:24:20.908 [2024-07-24 19:06:58.307917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.908 [2024-07-24 19:06:58.307943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.908 qpair failed and we were unable to recover it. 00:24:20.908 [2024-07-24 19:06:58.308091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.908 [2024-07-24 19:06:58.308123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.908 qpair failed and we were unable to recover it. 00:24:20.908 [2024-07-24 19:06:58.308282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.908 [2024-07-24 19:06:58.308307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.908 qpair failed and we were unable to recover it. 
00:24:20.908 [2024-07-24 19:06:58.308458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.908 [2024-07-24 19:06:58.308483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.908 qpair failed and we were unable to recover it. 00:24:20.908 [2024-07-24 19:06:58.308633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.908 [2024-07-24 19:06:58.308659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.908 qpair failed and we were unable to recover it. 00:24:20.908 [2024-07-24 19:06:58.308891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.908 [2024-07-24 19:06:58.308916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.908 qpair failed and we were unable to recover it. 00:24:20.908 [2024-07-24 19:06:58.309070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.908 [2024-07-24 19:06:58.309096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.908 qpair failed and we were unable to recover it. 00:24:20.908 [2024-07-24 19:06:58.309283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.908 [2024-07-24 19:06:58.309309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.908 qpair failed and we were unable to recover it. 00:24:20.908 [2024-07-24 19:06:58.309462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.908 [2024-07-24 19:06:58.309488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.908 qpair failed and we were unable to recover it. 00:24:20.908 [2024-07-24 19:06:58.309646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.908 [2024-07-24 19:06:58.309671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.908 qpair failed and we were unable to recover it. 00:24:20.908 [2024-07-24 19:06:58.309796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.908 [2024-07-24 19:06:58.309821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.908 qpair failed and we were unable to recover it. 00:24:20.908 [2024-07-24 19:06:58.309969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.908 [2024-07-24 19:06:58.309995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.908 qpair failed and we were unable to recover it. 00:24:20.908 [2024-07-24 19:06:58.310163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.908 [2024-07-24 19:06:58.310189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.908 qpair failed and we were unable to recover it. 
00:24:20.908 [2024-07-24 19:06:58.310320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.908 [2024-07-24 19:06:58.310346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.908 qpair failed and we were unable to recover it. 00:24:20.908 [2024-07-24 19:06:58.310501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.908 [2024-07-24 19:06:58.310527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.908 qpair failed and we were unable to recover it. 00:24:20.908 [2024-07-24 19:06:58.310678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.908 [2024-07-24 19:06:58.310703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.908 qpair failed and we were unable to recover it. 00:24:20.908 [2024-07-24 19:06:58.310833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.908 [2024-07-24 19:06:58.310859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.908 qpair failed and we were unable to recover it. 00:24:20.908 [2024-07-24 19:06:58.311004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.908 [2024-07-24 19:06:58.311030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.908 qpair failed and we were unable to recover it. 00:24:20.908 [2024-07-24 19:06:58.311178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.908 [2024-07-24 19:06:58.311204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.909 qpair failed and we were unable to recover it. 00:24:20.909 [2024-07-24 19:06:58.311362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.909 [2024-07-24 19:06:58.311387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.909 qpair failed and we were unable to recover it. 00:24:20.909 [2024-07-24 19:06:58.311543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.909 [2024-07-24 19:06:58.311573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.909 qpair failed and we were unable to recover it. 00:24:20.909 [2024-07-24 19:06:58.311706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.909 [2024-07-24 19:06:58.311731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.909 qpair failed and we were unable to recover it. 00:24:20.909 [2024-07-24 19:06:58.311886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.909 [2024-07-24 19:06:58.311912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.909 qpair failed and we were unable to recover it. 
00:24:20.909 [2024-07-24 19:06:58.312040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.909 [2024-07-24 19:06:58.312066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.909 qpair failed and we were unable to recover it. 00:24:20.909 [2024-07-24 19:06:58.312212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.909 [2024-07-24 19:06:58.312238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.909 qpair failed and we were unable to recover it. 00:24:20.909 [2024-07-24 19:06:58.312411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.909 [2024-07-24 19:06:58.312436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.909 qpair failed and we were unable to recover it. 00:24:20.909 [2024-07-24 19:06:58.312590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.909 [2024-07-24 19:06:58.312615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.909 qpair failed and we were unable to recover it. 00:24:20.909 [2024-07-24 19:06:58.312791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.909 [2024-07-24 19:06:58.312816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.909 qpair failed and we were unable to recover it. 00:24:20.909 [2024-07-24 19:06:58.312966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.909 [2024-07-24 19:06:58.312993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.909 qpair failed and we were unable to recover it. 00:24:20.909 [2024-07-24 19:06:58.313123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.909 [2024-07-24 19:06:58.313151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.909 qpair failed and we were unable to recover it. 00:24:20.909 [2024-07-24 19:06:58.313314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.909 [2024-07-24 19:06:58.313339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.909 qpair failed and we were unable to recover it. 00:24:20.909 [2024-07-24 19:06:58.313487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.909 [2024-07-24 19:06:58.313514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.909 qpair failed and we were unable to recover it. 00:24:20.909 [2024-07-24 19:06:58.313661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.909 [2024-07-24 19:06:58.313686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.909 qpair failed and we were unable to recover it. 
00:24:20.909 [2024-07-24 19:06:58.313836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.909 [2024-07-24 19:06:58.313861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.909 qpair failed and we were unable to recover it. 00:24:20.909 [2024-07-24 19:06:58.313992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.909 [2024-07-24 19:06:58.314018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.909 qpair failed and we were unable to recover it. 00:24:20.909 [2024-07-24 19:06:58.314168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.909 [2024-07-24 19:06:58.314196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.909 qpair failed and we were unable to recover it. 00:24:20.909 [2024-07-24 19:06:58.314349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.909 [2024-07-24 19:06:58.314375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.909 qpair failed and we were unable to recover it. 00:24:20.909 [2024-07-24 19:06:58.314556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.909 [2024-07-24 19:06:58.314581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.909 qpair failed and we were unable to recover it. 00:24:20.909 [2024-07-24 19:06:58.314711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.909 [2024-07-24 19:06:58.314736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.909 qpair failed and we were unable to recover it. 00:24:20.909 [2024-07-24 19:06:58.314864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.909 [2024-07-24 19:06:58.314890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.909 qpair failed and we were unable to recover it. 00:24:20.909 [2024-07-24 19:06:58.315009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.909 [2024-07-24 19:06:58.315034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.909 qpair failed and we were unable to recover it. 00:24:20.909 [2024-07-24 19:06:58.315166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.909 [2024-07-24 19:06:58.315192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.909 qpair failed and we were unable to recover it. 00:24:20.909 [2024-07-24 19:06:58.315344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.909 [2024-07-24 19:06:58.315370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.909 qpair failed and we were unable to recover it. 
00:24:20.909 [2024-07-24 19:06:58.315526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.909 [2024-07-24 19:06:58.315552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.909 qpair failed and we were unable to recover it. 00:24:20.909 [2024-07-24 19:06:58.315705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.909 [2024-07-24 19:06:58.315730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.909 qpair failed and we were unable to recover it. 00:24:20.910 [2024-07-24 19:06:58.315855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.910 [2024-07-24 19:06:58.315880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.910 qpair failed and we were unable to recover it. 00:24:20.910 [2024-07-24 19:06:58.316033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.910 [2024-07-24 19:06:58.316058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.910 qpair failed and we were unable to recover it. 00:24:20.910 [2024-07-24 19:06:58.316245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.910 [2024-07-24 19:06:58.316271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.910 qpair failed and we were unable to recover it. 00:24:20.910 [2024-07-24 19:06:58.316399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.910 [2024-07-24 19:06:58.316425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.910 qpair failed and we were unable to recover it. 00:24:20.910 [2024-07-24 19:06:58.316582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.910 [2024-07-24 19:06:58.316608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.910 qpair failed and we were unable to recover it. 00:24:20.910 [2024-07-24 19:06:58.316758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.910 [2024-07-24 19:06:58.316783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.910 qpair failed and we were unable to recover it. 00:24:20.910 [2024-07-24 19:06:58.316935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.910 [2024-07-24 19:06:58.316960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.910 qpair failed and we were unable to recover it. 00:24:20.910 [2024-07-24 19:06:58.317144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.910 [2024-07-24 19:06:58.317169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.910 qpair failed and we were unable to recover it. 
00:24:20.910 [2024-07-24 19:06:58.317320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.910 [2024-07-24 19:06:58.317345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.910 qpair failed and we were unable to recover it. 00:24:20.910 [2024-07-24 19:06:58.317477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.910 [2024-07-24 19:06:58.317503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.910 qpair failed and we were unable to recover it. 00:24:20.910 [2024-07-24 19:06:58.317674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.910 [2024-07-24 19:06:58.317699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.910 qpair failed and we were unable to recover it. 00:24:20.910 [2024-07-24 19:06:58.317849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.910 [2024-07-24 19:06:58.317874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.910 qpair failed and we were unable to recover it. 00:24:20.910 [2024-07-24 19:06:58.318026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.910 [2024-07-24 19:06:58.318051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.910 qpair failed and we were unable to recover it. 00:24:20.910 [2024-07-24 19:06:58.318181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.910 [2024-07-24 19:06:58.318208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.910 qpair failed and we were unable to recover it. 00:24:20.910 [2024-07-24 19:06:58.318366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.910 [2024-07-24 19:06:58.318391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.910 qpair failed and we were unable to recover it. 00:24:20.910 [2024-07-24 19:06:58.318540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.910 [2024-07-24 19:06:58.318569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.910 qpair failed and we were unable to recover it. 00:24:20.910 [2024-07-24 19:06:58.318727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.910 [2024-07-24 19:06:58.318753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.910 qpair failed and we were unable to recover it. 00:24:20.910 [2024-07-24 19:06:58.318876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.910 [2024-07-24 19:06:58.318901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.910 qpair failed and we were unable to recover it. 
00:24:20.910 [2024-07-24 19:06:58.319054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.910 [2024-07-24 19:06:58.319079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.910 qpair failed and we were unable to recover it. 00:24:20.910 [2024-07-24 19:06:58.319245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.910 [2024-07-24 19:06:58.319272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.910 qpair failed and we were unable to recover it. 00:24:20.911 [2024-07-24 19:06:58.319439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.911 [2024-07-24 19:06:58.319465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.911 qpair failed and we were unable to recover it. 00:24:20.911 [2024-07-24 19:06:58.319609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.911 [2024-07-24 19:06:58.319634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.911 qpair failed and we were unable to recover it. 00:24:20.911 [2024-07-24 19:06:58.319804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.911 [2024-07-24 19:06:58.319829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.911 qpair failed and we were unable to recover it. 00:24:20.911 [2024-07-24 19:06:58.319985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.911 [2024-07-24 19:06:58.320011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.911 qpair failed and we were unable to recover it. 00:24:20.911 [2024-07-24 19:06:58.320192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.911 [2024-07-24 19:06:58.320219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.911 qpair failed and we were unable to recover it. 00:24:20.911 [2024-07-24 19:06:58.320346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.911 [2024-07-24 19:06:58.320371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.911 qpair failed and we were unable to recover it. 00:24:20.911 [2024-07-24 19:06:58.320526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.911 [2024-07-24 19:06:58.320552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.911 qpair failed and we were unable to recover it. 00:24:20.911 [2024-07-24 19:06:58.320675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.911 [2024-07-24 19:06:58.320700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.911 qpair failed and we were unable to recover it. 
00:24:20.911 [2024-07-24 19:06:58.320826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.911 [2024-07-24 19:06:58.320852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.911 qpair failed and we were unable to recover it. 00:24:20.911 [2024-07-24 19:06:58.321009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.911 [2024-07-24 19:06:58.321035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.911 qpair failed and we were unable to recover it. 00:24:20.911 [2024-07-24 19:06:58.321185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.911 [2024-07-24 19:06:58.321212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.911 qpair failed and we were unable to recover it. 00:24:20.911 [2024-07-24 19:06:58.321362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.911 [2024-07-24 19:06:58.321387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.911 qpair failed and we were unable to recover it. 00:24:20.911 [2024-07-24 19:06:58.321564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.911 [2024-07-24 19:06:58.321590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.911 qpair failed and we were unable to recover it. 00:24:20.911 [2024-07-24 19:06:58.321737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.911 [2024-07-24 19:06:58.321763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.911 qpair failed and we were unable to recover it. 00:24:20.911 [2024-07-24 19:06:58.321891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.911 [2024-07-24 19:06:58.321916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.911 qpair failed and we were unable to recover it. 00:24:20.911 [2024-07-24 19:06:58.322037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.911 [2024-07-24 19:06:58.322062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.911 qpair failed and we were unable to recover it. 00:24:20.911 [2024-07-24 19:06:58.322198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.911 [2024-07-24 19:06:58.322225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.911 qpair failed and we were unable to recover it. 00:24:20.911 [2024-07-24 19:06:58.322356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.911 [2024-07-24 19:06:58.322382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.911 qpair failed and we were unable to recover it. 
00:24:20.911 [2024-07-24 19:06:58.322528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.911 [2024-07-24 19:06:58.322553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.911 qpair failed and we were unable to recover it. 00:24:20.911 [2024-07-24 19:06:58.322685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.911 [2024-07-24 19:06:58.322711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.911 qpair failed and we were unable to recover it. 00:24:20.911 [2024-07-24 19:06:58.322860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.911 [2024-07-24 19:06:58.322885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.911 qpair failed and we were unable to recover it. 00:24:20.911 [2024-07-24 19:06:58.323057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.911 [2024-07-24 19:06:58.323082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.911 qpair failed and we were unable to recover it. 00:24:20.911 [2024-07-24 19:06:58.323253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.911 [2024-07-24 19:06:58.323279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.911 qpair failed and we were unable to recover it. 00:24:20.911 [2024-07-24 19:06:58.323434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.911 [2024-07-24 19:06:58.323459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.911 qpair failed and we were unable to recover it. 00:24:20.911 [2024-07-24 19:06:58.323641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.911 [2024-07-24 19:06:58.323667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.911 qpair failed and we were unable to recover it. 00:24:20.911 [2024-07-24 19:06:58.323791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.911 [2024-07-24 19:06:58.323816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.911 qpair failed and we were unable to recover it. 00:24:20.911 [2024-07-24 19:06:58.323971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.912 [2024-07-24 19:06:58.323997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.912 qpair failed and we were unable to recover it. 00:24:20.912 [2024-07-24 19:06:58.324149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.912 [2024-07-24 19:06:58.324175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.912 qpair failed and we were unable to recover it. 
00:24:20.912 [2024-07-24 19:06:58.324332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.912 [2024-07-24 19:06:58.324359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.912 qpair failed and we were unable to recover it. 00:24:20.912 [2024-07-24 19:06:58.324493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.912 [2024-07-24 19:06:58.324520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.912 qpair failed and we were unable to recover it. 00:24:20.912 [2024-07-24 19:06:58.324677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.912 [2024-07-24 19:06:58.324703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.912 qpair failed and we were unable to recover it. 00:24:20.912 [2024-07-24 19:06:58.324858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.912 [2024-07-24 19:06:58.324885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.912 qpair failed and we were unable to recover it. 00:24:20.912 [2024-07-24 19:06:58.325035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.912 [2024-07-24 19:06:58.325061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.912 qpair failed and we were unable to recover it. 00:24:20.912 [2024-07-24 19:06:58.325226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.912 [2024-07-24 19:06:58.325252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.912 qpair failed and we were unable to recover it. 00:24:20.912 [2024-07-24 19:06:58.325430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.912 [2024-07-24 19:06:58.325455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.912 qpair failed and we were unable to recover it. 00:24:20.912 [2024-07-24 19:06:58.325589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.912 [2024-07-24 19:06:58.325619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.912 qpair failed and we were unable to recover it. 00:24:20.912 [2024-07-24 19:06:58.325772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.912 [2024-07-24 19:06:58.325797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.912 qpair failed and we were unable to recover it. 00:24:20.912 [2024-07-24 19:06:58.325927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.912 [2024-07-24 19:06:58.325953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.912 qpair failed and we were unable to recover it. 
00:24:20.912 [2024-07-24 19:06:58.326136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.912 [2024-07-24 19:06:58.326162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.912 qpair failed and we were unable to recover it.
00:24:20.912 [... the same three-message error sequence — posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously from 19:06:58.326289 through 19:06:58.362034 ...]
00:24:20.920 [2024-07-24 19:06:58.362168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.920 [2024-07-24 19:06:58.362194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.920 qpair failed and we were unable to recover it.
00:24:20.920 [2024-07-24 19:06:58.362323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.920 [2024-07-24 19:06:58.362348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.920 qpair failed and we were unable to recover it. 00:24:20.920 [2024-07-24 19:06:58.362506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.920 [2024-07-24 19:06:58.362532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.920 qpair failed and we were unable to recover it. 00:24:20.920 [2024-07-24 19:06:58.362660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.920 [2024-07-24 19:06:58.362685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.920 qpair failed and we were unable to recover it. 00:24:20.920 [2024-07-24 19:06:58.362833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.920 [2024-07-24 19:06:58.362858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.920 qpair failed and we were unable to recover it. 00:24:20.920 [2024-07-24 19:06:58.362984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.920 [2024-07-24 19:06:58.363011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.920 qpair failed and we were unable to recover it. 00:24:20.920 [2024-07-24 19:06:58.363162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.920 [2024-07-24 19:06:58.363188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.920 qpair failed and we were unable to recover it. 00:24:20.920 [2024-07-24 19:06:58.363317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.920 [2024-07-24 19:06:58.363343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.920 qpair failed and we were unable to recover it. 00:24:20.920 [2024-07-24 19:06:58.363473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.920 [2024-07-24 19:06:58.363499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.920 qpair failed and we were unable to recover it. 00:24:20.920 [2024-07-24 19:06:58.363672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.920 [2024-07-24 19:06:58.363697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.920 qpair failed and we were unable to recover it. 00:24:20.920 [2024-07-24 19:06:58.363823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.920 [2024-07-24 19:06:58.363849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.920 qpair failed and we were unable to recover it. 
00:24:20.920 [2024-07-24 19:06:58.363997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.920 [2024-07-24 19:06:58.364024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.920 qpair failed and we were unable to recover it. 00:24:20.920 [2024-07-24 19:06:58.364175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.920 [2024-07-24 19:06:58.364202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.920 qpair failed and we were unable to recover it. 00:24:20.920 [2024-07-24 19:06:58.364329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.920 [2024-07-24 19:06:58.364354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.920 qpair failed and we were unable to recover it. 00:24:20.920 [2024-07-24 19:06:58.364528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.920 [2024-07-24 19:06:58.364553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.920 qpair failed and we were unable to recover it. 00:24:20.920 [2024-07-24 19:06:58.364704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.920 [2024-07-24 19:06:58.364736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.920 qpair failed and we were unable to recover it. 00:24:20.920 [2024-07-24 19:06:58.364859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.920 [2024-07-24 19:06:58.364885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.920 qpair failed and we were unable to recover it. 00:24:20.920 [2024-07-24 19:06:58.365044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.920 [2024-07-24 19:06:58.365069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.920 qpair failed and we were unable to recover it. 00:24:20.920 [2024-07-24 19:06:58.365205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.920 [2024-07-24 19:06:58.365231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.920 qpair failed and we were unable to recover it. 00:24:20.920 [2024-07-24 19:06:58.365388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.920 [2024-07-24 19:06:58.365414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.920 qpair failed and we were unable to recover it. 00:24:20.920 [2024-07-24 19:06:58.365537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.920 [2024-07-24 19:06:58.365562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.920 qpair failed and we were unable to recover it. 
00:24:20.920 [2024-07-24 19:06:58.365692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.920 [2024-07-24 19:06:58.365718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.920 qpair failed and we were unable to recover it. 00:24:20.920 [2024-07-24 19:06:58.365869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.921 [2024-07-24 19:06:58.365895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.921 qpair failed and we were unable to recover it. 00:24:20.921 [2024-07-24 19:06:58.366050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.921 [2024-07-24 19:06:58.366075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.921 qpair failed and we were unable to recover it. 00:24:20.921 [2024-07-24 19:06:58.366210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.921 [2024-07-24 19:06:58.366237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.921 qpair failed and we were unable to recover it. 00:24:20.921 [2024-07-24 19:06:58.366396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.921 [2024-07-24 19:06:58.366421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.921 qpair failed and we were unable to recover it. 00:24:20.921 [2024-07-24 19:06:58.366547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.921 [2024-07-24 19:06:58.366572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.921 qpair failed and we were unable to recover it. 00:24:20.921 [2024-07-24 19:06:58.366721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.921 [2024-07-24 19:06:58.366746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.921 qpair failed and we were unable to recover it. 00:24:20.921 [2024-07-24 19:06:58.366875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.921 [2024-07-24 19:06:58.366902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.921 qpair failed and we were unable to recover it. 00:24:20.921 [2024-07-24 19:06:58.367067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.921 [2024-07-24 19:06:58.367093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.921 qpair failed and we were unable to recover it. 00:24:20.921 [2024-07-24 19:06:58.367232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.921 [2024-07-24 19:06:58.367257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.921 qpair failed and we were unable to recover it. 
00:24:20.921 [2024-07-24 19:06:58.367416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.921 [2024-07-24 19:06:58.367441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.921 qpair failed and we were unable to recover it. 00:24:20.921 [2024-07-24 19:06:58.367560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.921 [2024-07-24 19:06:58.367586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.921 qpair failed and we were unable to recover it. 00:24:20.921 [2024-07-24 19:06:58.367736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.921 [2024-07-24 19:06:58.367762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.921 qpair failed and we were unable to recover it. 00:24:20.921 [2024-07-24 19:06:58.367912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.921 [2024-07-24 19:06:58.367937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.921 qpair failed and we were unable to recover it. 00:24:20.921 [2024-07-24 19:06:58.368066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.921 [2024-07-24 19:06:58.368091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.921 qpair failed and we were unable to recover it. 00:24:20.921 [2024-07-24 19:06:58.368263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.921 [2024-07-24 19:06:58.368288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.921 qpair failed and we were unable to recover it. 00:24:20.921 [2024-07-24 19:06:58.368439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.921 [2024-07-24 19:06:58.368465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.921 qpair failed and we were unable to recover it. 00:24:20.921 [2024-07-24 19:06:58.368620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.921 [2024-07-24 19:06:58.368645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.921 qpair failed and we were unable to recover it. 00:24:20.921 [2024-07-24 19:06:58.368765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.921 [2024-07-24 19:06:58.368790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.921 qpair failed and we were unable to recover it. 00:24:20.921 [2024-07-24 19:06:58.368946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.921 [2024-07-24 19:06:58.368971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.921 qpair failed and we were unable to recover it. 
00:24:20.921 [2024-07-24 19:06:58.369110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.921 [2024-07-24 19:06:58.369136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.921 qpair failed and we were unable to recover it. 00:24:20.921 [2024-07-24 19:06:58.369292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.921 [2024-07-24 19:06:58.369318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.921 qpair failed and we were unable to recover it. 00:24:20.921 [2024-07-24 19:06:58.369473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.921 [2024-07-24 19:06:58.369499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.921 qpair failed and we were unable to recover it. 00:24:20.921 [2024-07-24 19:06:58.369653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.921 [2024-07-24 19:06:58.369678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.921 qpair failed and we were unable to recover it. 00:24:20.921 [2024-07-24 19:06:58.369826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.921 [2024-07-24 19:06:58.369852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.921 qpair failed and we were unable to recover it. 00:24:20.921 [2024-07-24 19:06:58.369976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.921 [2024-07-24 19:06:58.370001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.921 qpair failed and we were unable to recover it. 00:24:20.921 [2024-07-24 19:06:58.370132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.921 [2024-07-24 19:06:58.370159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.921 qpair failed and we were unable to recover it. 00:24:20.921 [2024-07-24 19:06:58.370312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.921 [2024-07-24 19:06:58.370338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.921 qpair failed and we were unable to recover it. 00:24:20.921 [2024-07-24 19:06:58.370489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.921 [2024-07-24 19:06:58.370514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.921 qpair failed and we were unable to recover it. 00:24:20.921 [2024-07-24 19:06:58.370660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.921 [2024-07-24 19:06:58.370686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.921 qpair failed and we were unable to recover it. 
00:24:20.921 [2024-07-24 19:06:58.370864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.921 [2024-07-24 19:06:58.370889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.921 qpair failed and we were unable to recover it. 00:24:20.921 [2024-07-24 19:06:58.371015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.921 [2024-07-24 19:06:58.371040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.921 qpair failed and we were unable to recover it. 00:24:20.922 [2024-07-24 19:06:58.371175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.922 [2024-07-24 19:06:58.371202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.922 qpair failed and we were unable to recover it. 00:24:20.922 [2024-07-24 19:06:58.371328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.922 [2024-07-24 19:06:58.371353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.922 qpair failed and we were unable to recover it. 00:24:20.922 [2024-07-24 19:06:58.371497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.922 [2024-07-24 19:06:58.371528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.922 qpair failed and we were unable to recover it. 00:24:20.922 [2024-07-24 19:06:58.371673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.922 [2024-07-24 19:06:58.371699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.922 qpair failed and we were unable to recover it. 00:24:20.922 [2024-07-24 19:06:58.371823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.922 [2024-07-24 19:06:58.371848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.922 qpair failed and we were unable to recover it. 00:24:20.922 [2024-07-24 19:06:58.372001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.922 [2024-07-24 19:06:58.372027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.922 qpair failed and we were unable to recover it. 00:24:20.922 [2024-07-24 19:06:58.372178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.922 [2024-07-24 19:06:58.372205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.922 qpair failed and we were unable to recover it. 00:24:20.922 [2024-07-24 19:06:58.372356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.922 [2024-07-24 19:06:58.372381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.922 qpair failed and we were unable to recover it. 
00:24:20.922 [2024-07-24 19:06:58.372509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.922 [2024-07-24 19:06:58.372535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.922 qpair failed and we were unable to recover it. 00:24:20.922 [2024-07-24 19:06:58.372690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.922 [2024-07-24 19:06:58.372716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.922 qpair failed and we were unable to recover it. 00:24:20.922 [2024-07-24 19:06:58.372846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.922 [2024-07-24 19:06:58.372872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.922 qpair failed and we were unable to recover it. 00:24:20.922 [2024-07-24 19:06:58.373021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.922 [2024-07-24 19:06:58.373046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.922 qpair failed and we were unable to recover it. 00:24:20.922 [2024-07-24 19:06:58.373180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.922 [2024-07-24 19:06:58.373206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.922 qpair failed and we were unable to recover it. 00:24:20.922 [2024-07-24 19:06:58.373329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.922 [2024-07-24 19:06:58.373354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.922 qpair failed and we were unable to recover it. 00:24:20.922 [2024-07-24 19:06:58.373506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.922 [2024-07-24 19:06:58.373532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.922 qpair failed and we were unable to recover it. 00:24:20.922 [2024-07-24 19:06:58.373662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.922 [2024-07-24 19:06:58.373689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.922 qpair failed and we were unable to recover it. 00:24:20.922 [2024-07-24 19:06:58.373820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.922 [2024-07-24 19:06:58.373846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.922 qpair failed and we were unable to recover it. 00:24:20.922 [2024-07-24 19:06:58.373970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.922 [2024-07-24 19:06:58.373995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.922 qpair failed and we were unable to recover it. 
00:24:20.922 [2024-07-24 19:06:58.374178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.922 [2024-07-24 19:06:58.374204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.922 qpair failed and we were unable to recover it. 00:24:20.922 [2024-07-24 19:06:58.374355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.922 [2024-07-24 19:06:58.374380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.922 qpair failed and we were unable to recover it. 00:24:20.922 [2024-07-24 19:06:58.374531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.922 [2024-07-24 19:06:58.374556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.922 qpair failed and we were unable to recover it. 00:24:20.922 [2024-07-24 19:06:58.374692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.922 [2024-07-24 19:06:58.374718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.922 qpair failed and we were unable to recover it. 00:24:20.922 [2024-07-24 19:06:58.374848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.922 [2024-07-24 19:06:58.374874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.922 qpair failed and we were unable to recover it. 00:24:20.922 [2024-07-24 19:06:58.375026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.922 [2024-07-24 19:06:58.375052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.922 qpair failed and we were unable to recover it. 00:24:20.922 [2024-07-24 19:06:58.375201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.922 [2024-07-24 19:06:58.375227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.922 qpair failed and we were unable to recover it. 00:24:20.922 [2024-07-24 19:06:58.375374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.922 [2024-07-24 19:06:58.375399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.922 qpair failed and we were unable to recover it. 00:24:20.922 [2024-07-24 19:06:58.375565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.922 [2024-07-24 19:06:58.375590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.922 qpair failed and we were unable to recover it. 00:24:20.922 [2024-07-24 19:06:58.375721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.922 [2024-07-24 19:06:58.375746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.922 qpair failed and we were unable to recover it. 
00:24:20.922 [2024-07-24 19:06:58.375900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.922 [2024-07-24 19:06:58.375925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.922 qpair failed and we were unable to recover it. 00:24:20.922 [2024-07-24 19:06:58.376176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.922 [2024-07-24 19:06:58.376203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.922 qpair failed and we were unable to recover it. 00:24:20.923 [2024-07-24 19:06:58.376356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.923 [2024-07-24 19:06:58.376381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.923 qpair failed and we were unable to recover it. 00:24:20.923 [2024-07-24 19:06:58.376502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.923 [2024-07-24 19:06:58.376529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.923 qpair failed and we were unable to recover it. 00:24:20.923 [2024-07-24 19:06:58.376680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.923 [2024-07-24 19:06:58.376706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.923 qpair failed and we were unable to recover it. 00:24:20.923 [2024-07-24 19:06:58.376854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.923 [2024-07-24 19:06:58.376879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.923 qpair failed and we were unable to recover it. 00:24:20.923 [2024-07-24 19:06:58.377021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.923 [2024-07-24 19:06:58.377048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.923 qpair failed and we were unable to recover it. 00:24:20.923 [2024-07-24 19:06:58.377201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.923 [2024-07-24 19:06:58.377227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.923 qpair failed and we were unable to recover it. 00:24:20.923 [2024-07-24 19:06:58.377406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.923 [2024-07-24 19:06:58.377431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.923 qpair failed and we were unable to recover it. 00:24:20.923 [2024-07-24 19:06:58.377555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.923 [2024-07-24 19:06:58.377580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.923 qpair failed and we were unable to recover it. 
00:24:20.923 [2024-07-24 19:06:58.377755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.923 [2024-07-24 19:06:58.377780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.923 qpair failed and we were unable to recover it. 00:24:20.923 [2024-07-24 19:06:58.377932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.923 [2024-07-24 19:06:58.377958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.923 qpair failed and we were unable to recover it. 00:24:20.923 [2024-07-24 19:06:58.378080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.923 [2024-07-24 19:06:58.378113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.923 qpair failed and we were unable to recover it. 00:24:20.923 [2024-07-24 19:06:58.378271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.923 [2024-07-24 19:06:58.378297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.923 qpair failed and we were unable to recover it. 00:24:20.923 [2024-07-24 19:06:58.378474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.923 [2024-07-24 19:06:58.378503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.923 qpair failed and we were unable to recover it. 00:24:20.923 [2024-07-24 19:06:58.378658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.923 [2024-07-24 19:06:58.378684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.923 qpair failed and we were unable to recover it. 00:24:20.923 [2024-07-24 19:06:58.378917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.923 [2024-07-24 19:06:58.378942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.923 qpair failed and we were unable to recover it. 00:24:20.923 [2024-07-24 19:06:58.379091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.923 [2024-07-24 19:06:58.379124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.923 qpair failed and we were unable to recover it. 00:24:20.923 [2024-07-24 19:06:58.379356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.923 [2024-07-24 19:06:58.379381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.923 qpair failed and we were unable to recover it. 00:24:20.923 [2024-07-24 19:06:58.379568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.923 [2024-07-24 19:06:58.379593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.923 qpair failed and we were unable to recover it. 
00:24:20.923 [2024-07-24 19:06:58.379751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.923 [2024-07-24 19:06:58.379777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.923 qpair failed and we were unable to recover it. 00:24:20.923 [2024-07-24 19:06:58.379939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.923 [2024-07-24 19:06:58.379964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.923 qpair failed and we were unable to recover it. 00:24:20.923 [2024-07-24 19:06:58.380115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.923 [2024-07-24 19:06:58.380141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.923 qpair failed and we were unable to recover it. 00:24:20.923 [2024-07-24 19:06:58.380269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.923 [2024-07-24 19:06:58.380295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.923 qpair failed and we were unable to recover it. 00:24:20.923 [2024-07-24 19:06:58.380431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.923 [2024-07-24 19:06:58.380457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.923 qpair failed and we were unable to recover it. 00:24:20.923 [2024-07-24 19:06:58.380589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.923 [2024-07-24 19:06:58.380616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.923 qpair failed and we were unable to recover it. 00:24:20.923 [2024-07-24 19:06:58.380743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.923 [2024-07-24 19:06:58.380768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.923 qpair failed and we were unable to recover it. 00:24:20.923 [2024-07-24 19:06:58.380901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.923 [2024-07-24 19:06:58.380926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.923 qpair failed and we were unable to recover it. 00:24:20.923 [2024-07-24 19:06:58.381063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.923 [2024-07-24 19:06:58.381089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.923 qpair failed and we were unable to recover it. 00:24:20.923 [2024-07-24 19:06:58.381228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.924 [2024-07-24 19:06:58.381254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.924 qpair failed and we were unable to recover it. 
00:24:20.924 [2024-07-24 19:06:58.381401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.924 [2024-07-24 19:06:58.381427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.924 qpair failed and we were unable to recover it. 00:24:20.924 [2024-07-24 19:06:58.381601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.924 [2024-07-24 19:06:58.381626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.924 qpair failed and we were unable to recover it. 00:24:20.924 [2024-07-24 19:06:58.381752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.924 [2024-07-24 19:06:58.381778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.924 qpair failed and we were unable to recover it. 00:24:20.924 [2024-07-24 19:06:58.381901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.924 [2024-07-24 19:06:58.381927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.924 qpair failed and we were unable to recover it. 00:24:20.924 [2024-07-24 19:06:58.382042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.924 [2024-07-24 19:06:58.382068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.924 qpair failed and we were unable to recover it. 00:24:20.924 [2024-07-24 19:06:58.382224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.924 [2024-07-24 19:06:58.382251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.924 qpair failed and we were unable to recover it. 00:24:20.924 [2024-07-24 19:06:58.382401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.924 [2024-07-24 19:06:58.382428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.924 qpair failed and we were unable to recover it. 00:24:20.924 [2024-07-24 19:06:58.382576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.924 [2024-07-24 19:06:58.382601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.924 qpair failed and we were unable to recover it. 00:24:20.924 [2024-07-24 19:06:58.382758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.924 [2024-07-24 19:06:58.382783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.924 qpair failed and we were unable to recover it. 00:24:20.924 [2024-07-24 19:06:58.382965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.924 [2024-07-24 19:06:58.382991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.924 qpair failed and we were unable to recover it. 
00:24:20.924 [2024-07-24 19:06:58.383141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.924 [2024-07-24 19:06:58.383168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.924 qpair failed and we were unable to recover it. 00:24:20.924 [2024-07-24 19:06:58.383296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.924 [2024-07-24 19:06:58.383321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.924 qpair failed and we were unable to recover it. 00:24:20.924 [2024-07-24 19:06:58.383482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.924 [2024-07-24 19:06:58.383508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.924 qpair failed and we were unable to recover it. 00:24:20.924 [2024-07-24 19:06:58.383633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.924 [2024-07-24 19:06:58.383658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.924 qpair failed and we were unable to recover it. 00:24:20.924 [2024-07-24 19:06:58.383783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.924 [2024-07-24 19:06:58.383808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.924 qpair failed and we were unable to recover it. 00:24:20.924 [2024-07-24 19:06:58.383963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.924 [2024-07-24 19:06:58.383988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.924 qpair failed and we were unable to recover it. 00:24:20.924 [2024-07-24 19:06:58.384129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.924 [2024-07-24 19:06:58.384156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.924 qpair failed and we were unable to recover it. 00:24:20.924 [2024-07-24 19:06:58.384284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.924 [2024-07-24 19:06:58.384309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.924 qpair failed and we were unable to recover it. 00:24:20.924 [2024-07-24 19:06:58.384432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.924 [2024-07-24 19:06:58.384458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.924 qpair failed and we were unable to recover it. 00:24:20.924 [2024-07-24 19:06:58.384584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.924 [2024-07-24 19:06:58.384609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.924 qpair failed and we were unable to recover it. 
00:24:20.924 [2024-07-24 19:06:58.384786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.924 [2024-07-24 19:06:58.384811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.924 qpair failed and we were unable to recover it.
00:24:20.924 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for roughly 200 further connect attempts with device timestamps between 19:06:58.384945 and 19:06:58.421118 ...]
00:24:20.931 [2024-07-24 19:06:58.421275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.931 [2024-07-24 19:06:58.421301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:20.931 qpair failed and we were unable to recover it.
00:24:20.931 [2024-07-24 19:06:58.421432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.931 [2024-07-24 19:06:58.421458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.931 qpair failed and we were unable to recover it. 00:24:20.931 [2024-07-24 19:06:58.421588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.931 [2024-07-24 19:06:58.421613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.931 qpair failed and we were unable to recover it. 00:24:20.931 [2024-07-24 19:06:58.421772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.931 [2024-07-24 19:06:58.421798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.931 qpair failed and we were unable to recover it. 00:24:20.931 [2024-07-24 19:06:58.421947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.931 [2024-07-24 19:06:58.421973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.931 qpair failed and we were unable to recover it. 00:24:20.931 [2024-07-24 19:06:58.422205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.931 [2024-07-24 19:06:58.422232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.931 qpair failed and we were unable to recover it. 00:24:20.931 [2024-07-24 19:06:58.422364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.931 [2024-07-24 19:06:58.422389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.931 qpair failed and we were unable to recover it. 00:24:20.931 [2024-07-24 19:06:58.422522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.931 [2024-07-24 19:06:58.422548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.931 qpair failed and we were unable to recover it. 00:24:20.931 [2024-07-24 19:06:58.422684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.931 [2024-07-24 19:06:58.422710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.931 qpair failed and we were unable to recover it. 00:24:20.931 [2024-07-24 19:06:58.422846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.931 [2024-07-24 19:06:58.422871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.931 qpair failed and we were unable to recover it. 00:24:20.931 [2024-07-24 19:06:58.423031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.931 [2024-07-24 19:06:58.423056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.931 qpair failed and we were unable to recover it. 
00:24:20.931 [2024-07-24 19:06:58.423181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.931 [2024-07-24 19:06:58.423207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.931 qpair failed and we were unable to recover it. 00:24:20.931 [2024-07-24 19:06:58.423334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.931 [2024-07-24 19:06:58.423360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.931 qpair failed and we were unable to recover it. 00:24:20.931 [2024-07-24 19:06:58.423530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.931 [2024-07-24 19:06:58.423570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.931 qpair failed and we were unable to recover it. 00:24:20.931 [2024-07-24 19:06:58.423724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.931 [2024-07-24 19:06:58.423751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.931 qpair failed and we were unable to recover it. 00:24:20.931 [2024-07-24 19:06:58.423902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.931 [2024-07-24 19:06:58.423928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.931 qpair failed and we were unable to recover it. 00:24:20.931 [2024-07-24 19:06:58.424078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.931 [2024-07-24 19:06:58.424110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.931 qpair failed and we were unable to recover it. 00:24:20.931 [2024-07-24 19:06:58.424245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.931 [2024-07-24 19:06:58.424270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.931 qpair failed and we were unable to recover it. 00:24:20.931 [2024-07-24 19:06:58.424397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.931 [2024-07-24 19:06:58.424422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.931 qpair failed and we were unable to recover it. 00:24:20.931 [2024-07-24 19:06:58.424555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.931 [2024-07-24 19:06:58.424581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.931 qpair failed and we were unable to recover it. 00:24:20.931 [2024-07-24 19:06:58.424734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.931 [2024-07-24 19:06:58.424759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.931 qpair failed and we were unable to recover it. 
00:24:20.931 [2024-07-24 19:06:58.424888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.931 [2024-07-24 19:06:58.424914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.931 qpair failed and we were unable to recover it. 00:24:20.931 [2024-07-24 19:06:58.425049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.931 [2024-07-24 19:06:58.425080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.931 qpair failed and we were unable to recover it. 00:24:20.931 [2024-07-24 19:06:58.425261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.931 [2024-07-24 19:06:58.425287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.931 qpair failed and we were unable to recover it. 00:24:20.931 [2024-07-24 19:06:58.425415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.931 [2024-07-24 19:06:58.425441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.931 qpair failed and we were unable to recover it. 00:24:20.931 [2024-07-24 19:06:58.425559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.931 [2024-07-24 19:06:58.425587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.931 qpair failed and we were unable to recover it. 00:24:20.931 [2024-07-24 19:06:58.425736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.931 [2024-07-24 19:06:58.425761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.931 qpair failed and we were unable to recover it. 00:24:20.931 [2024-07-24 19:06:58.425910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.931 [2024-07-24 19:06:58.425935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.931 qpair failed and we were unable to recover it. 00:24:20.931 [2024-07-24 19:06:58.426095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.931 [2024-07-24 19:06:58.426144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.931 qpair failed and we were unable to recover it. 00:24:20.931 [2024-07-24 19:06:58.426305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.931 [2024-07-24 19:06:58.426332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.931 qpair failed and we were unable to recover it. 00:24:20.931 [2024-07-24 19:06:58.426456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.931 [2024-07-24 19:06:58.426481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.931 qpair failed and we were unable to recover it. 
00:24:20.931 [2024-07-24 19:06:58.426633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.931 [2024-07-24 19:06:58.426658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.931 qpair failed and we were unable to recover it. 00:24:20.931 [2024-07-24 19:06:58.426815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.931 [2024-07-24 19:06:58.426840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.931 qpair failed and we were unable to recover it. 00:24:20.931 [2024-07-24 19:06:58.426964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.931 [2024-07-24 19:06:58.426989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.931 qpair failed and we were unable to recover it. 00:24:20.931 [2024-07-24 19:06:58.427137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.427177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 00:24:20.932 [2024-07-24 19:06:58.427338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.427365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 00:24:20.932 [2024-07-24 19:06:58.427504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.427530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 00:24:20.932 [2024-07-24 19:06:58.427668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.427694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 00:24:20.932 [2024-07-24 19:06:58.427870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.427896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 00:24:20.932 [2024-07-24 19:06:58.428080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.428112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 00:24:20.932 [2024-07-24 19:06:58.428249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.428277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 
00:24:20.932 [2024-07-24 19:06:58.428432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.428458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 00:24:20.932 [2024-07-24 19:06:58.428580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.428605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 00:24:20.932 [2024-07-24 19:06:58.428739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.428764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 00:24:20.932 [2024-07-24 19:06:58.428918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.428943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 00:24:20.932 [2024-07-24 19:06:58.429069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.429095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 00:24:20.932 [2024-07-24 19:06:58.429227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.429253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 00:24:20.932 [2024-07-24 19:06:58.429403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.429428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 00:24:20.932 [2024-07-24 19:06:58.429578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.429604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 00:24:20.932 [2024-07-24 19:06:58.429725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.429755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 00:24:20.932 [2024-07-24 19:06:58.429904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.429929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 
00:24:20.932 [2024-07-24 19:06:58.430050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.430075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 00:24:20.932 [2024-07-24 19:06:58.430220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.430246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 00:24:20.932 [2024-07-24 19:06:58.430377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.430403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 00:24:20.932 [2024-07-24 19:06:58.430523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.430548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 00:24:20.932 [2024-07-24 19:06:58.430692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.430717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 00:24:20.932 [2024-07-24 19:06:58.430858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.430883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 00:24:20.932 [2024-07-24 19:06:58.431003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.431028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 00:24:20.932 [2024-07-24 19:06:58.431180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.431206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 00:24:20.932 [2024-07-24 19:06:58.431351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.431377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 00:24:20.932 [2024-07-24 19:06:58.431510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.431536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 
00:24:20.932 [2024-07-24 19:06:58.431689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.431714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 00:24:20.932 [2024-07-24 19:06:58.431887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.431912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 00:24:20.932 [2024-07-24 19:06:58.432040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.432065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 00:24:20.932 [2024-07-24 19:06:58.432222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.432248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 00:24:20.932 [2024-07-24 19:06:58.432392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.432417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 00:24:20.932 [2024-07-24 19:06:58.432563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.432588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 00:24:20.932 [2024-07-24 19:06:58.432734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.432759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 00:24:20.932 [2024-07-24 19:06:58.432885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.932 [2024-07-24 19:06:58.432910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.932 qpair failed and we were unable to recover it. 00:24:20.933 [2024-07-24 19:06:58.433041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.933 [2024-07-24 19:06:58.433065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.933 qpair failed and we were unable to recover it. 00:24:20.933 [2024-07-24 19:06:58.433220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.933 [2024-07-24 19:06:58.433245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.933 qpair failed and we were unable to recover it. 
00:24:20.933 [2024-07-24 19:06:58.433369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.933 [2024-07-24 19:06:58.433394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.933 qpair failed and we were unable to recover it. 00:24:20.933 [2024-07-24 19:06:58.433548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.933 [2024-07-24 19:06:58.433572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.933 qpair failed and we were unable to recover it. 00:24:20.933 [2024-07-24 19:06:58.433722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.933 [2024-07-24 19:06:58.433747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.933 qpair failed and we were unable to recover it. 00:24:20.933 [2024-07-24 19:06:58.433901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.933 [2024-07-24 19:06:58.433926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.933 qpair failed and we were unable to recover it. 00:24:20.933 [2024-07-24 19:06:58.434055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.933 [2024-07-24 19:06:58.434081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.933 qpair failed and we were unable to recover it. 00:24:20.933 [2024-07-24 19:06:58.434219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.933 [2024-07-24 19:06:58.434249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.933 qpair failed and we were unable to recover it. 00:24:20.933 [2024-07-24 19:06:58.434389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.933 [2024-07-24 19:06:58.434414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.933 qpair failed and we were unable to recover it. 00:24:20.933 [2024-07-24 19:06:58.434535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.933 [2024-07-24 19:06:58.434559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.933 qpair failed and we were unable to recover it. 00:24:20.933 [2024-07-24 19:06:58.434711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.933 [2024-07-24 19:06:58.434736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.933 qpair failed and we were unable to recover it. 00:24:20.933 [2024-07-24 19:06:58.434862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.933 [2024-07-24 19:06:58.434887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.933 qpair failed and we were unable to recover it. 
00:24:20.933 [2024-07-24 19:06:58.435065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.933 [2024-07-24 19:06:58.435090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.933 qpair failed and we were unable to recover it. 00:24:20.933 [2024-07-24 19:06:58.435221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.933 [2024-07-24 19:06:58.435246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.933 qpair failed and we were unable to recover it. 00:24:20.933 [2024-07-24 19:06:58.435373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.933 [2024-07-24 19:06:58.435400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.933 qpair failed and we were unable to recover it. 00:24:20.933 [2024-07-24 19:06:58.435550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.933 [2024-07-24 19:06:58.435575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.933 qpair failed and we were unable to recover it. 00:24:20.933 [2024-07-24 19:06:58.435729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.933 [2024-07-24 19:06:58.435754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.933 qpair failed and we were unable to recover it. 00:24:20.933 [2024-07-24 19:06:58.435907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.933 [2024-07-24 19:06:58.435932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.933 qpair failed and we were unable to recover it. 00:24:20.933 [2024-07-24 19:06:58.436082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.933 [2024-07-24 19:06:58.436114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.933 qpair failed and we were unable to recover it. 00:24:20.933 [2024-07-24 19:06:58.436264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.933 [2024-07-24 19:06:58.436289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.933 qpair failed and we were unable to recover it. 00:24:20.933 [2024-07-24 19:06:58.436411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.933 [2024-07-24 19:06:58.436436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.933 qpair failed and we were unable to recover it. 00:24:20.933 [2024-07-24 19:06:58.436597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.933 [2024-07-24 19:06:58.436622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.933 qpair failed and we were unable to recover it. 
00:24:20.933 [2024-07-24 19:06:58.436740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.933 [2024-07-24 19:06:58.436765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.933 qpair failed and we were unable to recover it. 00:24:20.933 [2024-07-24 19:06:58.436884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.933 [2024-07-24 19:06:58.436909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.933 qpair failed and we were unable to recover it. 00:24:20.933 [2024-07-24 19:06:58.437039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.933 [2024-07-24 19:06:58.437064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.933 qpair failed and we were unable to recover it. 00:24:20.933 [2024-07-24 19:06:58.437199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.933 [2024-07-24 19:06:58.437225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.933 qpair failed and we were unable to recover it. 00:24:20.933 [2024-07-24 19:06:58.437375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.933 [2024-07-24 19:06:58.437400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.933 qpair failed and we were unable to recover it. 00:24:20.933 [2024-07-24 19:06:58.437531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.933 [2024-07-24 19:06:58.437558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.933 qpair failed and we were unable to recover it. 00:24:20.933 [2024-07-24 19:06:58.437686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.933 [2024-07-24 19:06:58.437711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.933 qpair failed and we were unable to recover it. 00:24:20.933 [2024-07-24 19:06:58.437863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.933 [2024-07-24 19:06:58.437888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.933 qpair failed and we were unable to recover it. 00:24:20.933 [2024-07-24 19:06:58.438020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.934 [2024-07-24 19:06:58.438045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.934 qpair failed and we were unable to recover it. 00:24:20.934 [2024-07-24 19:06:58.438198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.934 [2024-07-24 19:06:58.438223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.934 qpair failed and we were unable to recover it. 
00:24:20.934 [2024-07-24 19:06:58.438371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.934 [2024-07-24 19:06:58.438397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.934 qpair failed and we were unable to recover it. 00:24:20.934 [2024-07-24 19:06:58.438520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.934 [2024-07-24 19:06:58.438546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.934 qpair failed and we were unable to recover it. 00:24:20.934 [2024-07-24 19:06:58.438696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.934 [2024-07-24 19:06:58.438725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.934 qpair failed and we were unable to recover it. 00:24:20.934 [2024-07-24 19:06:58.438881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.934 [2024-07-24 19:06:58.438907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.934 qpair failed and we were unable to recover it. 00:24:20.934 [2024-07-24 19:06:58.439034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.934 [2024-07-24 19:06:58.439060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.934 qpair failed and we were unable to recover it. 00:24:20.934 [2024-07-24 19:06:58.439244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.934 [2024-07-24 19:06:58.439269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.934 qpair failed and we were unable to recover it. 00:24:20.934 [2024-07-24 19:06:58.439423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.934 [2024-07-24 19:06:58.439448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.934 qpair failed and we were unable to recover it. 00:24:20.934 [2024-07-24 19:06:58.439599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.934 [2024-07-24 19:06:58.439623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.934 qpair failed and we were unable to recover it. 00:24:20.934 [2024-07-24 19:06:58.439776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.934 [2024-07-24 19:06:58.439803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.934 qpair failed and we were unable to recover it. 00:24:20.934 [2024-07-24 19:06:58.439959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.934 [2024-07-24 19:06:58.439984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.934 qpair failed and we were unable to recover it. 
00:24:20.934 [2024-07-24 19:06:58.440141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.934 [2024-07-24 19:06:58.440167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.934 qpair failed and we were unable to recover it. 00:24:20.934 [2024-07-24 19:06:58.440318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.934 [2024-07-24 19:06:58.440344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.934 qpair failed and we were unable to recover it. 00:24:20.934 [2024-07-24 19:06:58.440497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.934 [2024-07-24 19:06:58.440522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.934 qpair failed and we were unable to recover it. 00:24:20.934 [2024-07-24 19:06:58.440671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.934 [2024-07-24 19:06:58.440695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.934 qpair failed and we were unable to recover it. 00:24:20.934 [2024-07-24 19:06:58.440845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.934 [2024-07-24 19:06:58.440871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.934 qpair failed and we were unable to recover it. 00:24:20.934 [2024-07-24 19:06:58.440991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.934 [2024-07-24 19:06:58.441016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.934 qpair failed and we were unable to recover it. 00:24:20.934 [2024-07-24 19:06:58.441192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.934 [2024-07-24 19:06:58.441218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.934 qpair failed and we were unable to recover it. 00:24:20.934 [2024-07-24 19:06:58.441372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.934 [2024-07-24 19:06:58.441397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.934 qpair failed and we were unable to recover it. 00:24:20.934 [2024-07-24 19:06:58.441561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.934 [2024-07-24 19:06:58.441586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.934 qpair failed and we were unable to recover it. 00:24:20.934 [2024-07-24 19:06:58.441734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.934 [2024-07-24 19:06:58.441759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.934 qpair failed and we were unable to recover it. 
00:24:20.934 [2024-07-24 19:06:58.441882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.934 [2024-07-24 19:06:58.441907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.934 qpair failed and we were unable to recover it. 00:24:20.934 [2024-07-24 19:06:58.442055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.934 [2024-07-24 19:06:58.442080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.934 qpair failed and we were unable to recover it. 00:24:20.934 [2024-07-24 19:06:58.442213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.934 [2024-07-24 19:06:58.442238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.934 qpair failed and we were unable to recover it. 00:24:20.934 [2024-07-24 19:06:58.442360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.934 [2024-07-24 19:06:58.442385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.934 qpair failed and we were unable to recover it. 00:24:20.934 [2024-07-24 19:06:58.442508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.934 [2024-07-24 19:06:58.442533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.934 qpair failed and we were unable to recover it. 00:24:20.934 [2024-07-24 19:06:58.442679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.934 [2024-07-24 19:06:58.442704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.934 qpair failed and we were unable to recover it. 00:24:20.934 [2024-07-24 19:06:58.442828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.934 [2024-07-24 19:06:58.442853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.934 qpair failed and we were unable to recover it. 00:24:20.934 [2024-07-24 19:06:58.442981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.934 [2024-07-24 19:06:58.443006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.934 qpair failed and we were unable to recover it. 00:24:20.934 [2024-07-24 19:06:58.443161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.934 [2024-07-24 19:06:58.443189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.934 qpair failed and we were unable to recover it. 00:24:20.934 [2024-07-24 19:06:58.443342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.934 [2024-07-24 19:06:58.443368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:20.934 qpair failed and we were unable to recover it. 
00:24:20.934 [2024-07-24 19:06:58.443495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.934 [2024-07-24 19:06:58.443520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:20.934 qpair failed and we were unable to recover it.
00:24:20.935 [2024-07-24 19:06:58.445859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:20.935 [2024-07-24 19:06:58.445899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:20.935 qpair failed and we were unable to recover it.
[... the remaining entries from 19:06:58.443690 through 19:06:58.455409 repeat the same three-line failure (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it.), alternating between tqpair=0x17c6250 and tqpair=0x7f0718000b90, always against addr=10.0.0.2, port=4420, as the console timestamp advances from 00:24:20.935 to 00:24:21.219 ...]
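Every entry in the block above is the same low-level event. errno = 111 is ECONNREFUSED on Linux: the TCP connect() to 10.0.0.2 on port 4420 (the IANA-assigned NVMe/TCP port) is being actively refused, which usually means nothing was listening on the target at that instant. The following is a minimal standalone sketch of just that failing step, assuming only the address and port shown in the log; it is a plain blocking connect(), not SPDK's posix_sock_create.

/* Minimal reproduction of the failing step: a blocking connect() to the
 * target seen in the log. Against a reachable host with no listener on
 * 10.0.0.2:4420, connect() returns -1 with errno = 111 (ECONNREFUSED),
 * matching the "connect() failed, errno = 111" lines above. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);              /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}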
[... the identical triplet continues from 19:06:58.455535 through 19:06:58.479491, still alternating between tqpair=0x17c6250 and tqpair=0x7f0718000b90 against addr=10.0.0.2, port=4420, as the console timestamp advances from 00:24:21.219 to 00:24:21.223 ...]
00:24:21.223 [2024-07-24 19:06:58.479612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.223 [2024-07-24 19:06:58.479637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.223 qpair failed and we were unable to recover it.
00:24:21.223 [2024-07-24 19:06:58.479763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.223 [2024-07-24 19:06:58.479788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.223 qpair failed and we were unable to recover it.
00:24:21.223 [2024-07-24 19:06:58.479981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.223 [2024-07-24 19:06:58.480021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.223 qpair failed and we were unable to recover it.
[... the same triplet repeats 5 more times for tqpair=0x7f0718000b90, 19:06:58.480217 through 19:06:58.480917 ...]
00:24:21.223 [2024-07-24 19:06:58.481068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.223 [2024-07-24 19:06:58.481094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.223 qpair failed and we were unable to recover it.
[... the same triplet repeats 161 more times for tqpair=0x17c6250, 19:06:58.481260 through 19:06:58.508527 ...]
00:24:21.227 [2024-07-24 19:06:58.508706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.227 [2024-07-24 19:06:58.508731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.227 qpair failed and we were unable to recover it.
00:24:21.227 [2024-07-24 19:06:58.508880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.227 [2024-07-24 19:06:58.508905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.227 qpair failed and we were unable to recover it. 00:24:21.227 [2024-07-24 19:06:58.509032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.227 [2024-07-24 19:06:58.509057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.227 qpair failed and we were unable to recover it. 00:24:21.227 [2024-07-24 19:06:58.509204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.227 [2024-07-24 19:06:58.509230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.227 qpair failed and we were unable to recover it. 00:24:21.227 [2024-07-24 19:06:58.509377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.509402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 00:24:21.228 [2024-07-24 19:06:58.509530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.509555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 00:24:21.228 [2024-07-24 19:06:58.509707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.509732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 00:24:21.228 [2024-07-24 19:06:58.509913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.509938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 00:24:21.228 [2024-07-24 19:06:58.510050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.510075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 00:24:21.228 [2024-07-24 19:06:58.510236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.510261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 00:24:21.228 [2024-07-24 19:06:58.510415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.510439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 
00:24:21.228 [2024-07-24 19:06:58.510584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.510609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 00:24:21.228 [2024-07-24 19:06:58.510742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.510767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 00:24:21.228 [2024-07-24 19:06:58.510922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.510947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 00:24:21.228 [2024-07-24 19:06:58.511095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.511149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 00:24:21.228 [2024-07-24 19:06:58.511317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.511342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 00:24:21.228 [2024-07-24 19:06:58.511490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.511515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 00:24:21.228 [2024-07-24 19:06:58.511640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.511666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 00:24:21.228 [2024-07-24 19:06:58.511836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.511861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 00:24:21.228 [2024-07-24 19:06:58.511984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.512008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 00:24:21.228 [2024-07-24 19:06:58.512157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.512183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 
00:24:21.228 [2024-07-24 19:06:58.512331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.512356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 00:24:21.228 [2024-07-24 19:06:58.512504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.512529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 00:24:21.228 [2024-07-24 19:06:58.512658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.512683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 00:24:21.228 [2024-07-24 19:06:58.512808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.512832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 00:24:21.228 [2024-07-24 19:06:58.512982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.513008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 00:24:21.228 [2024-07-24 19:06:58.513161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.513187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 00:24:21.228 [2024-07-24 19:06:58.513346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.513371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 00:24:21.228 [2024-07-24 19:06:58.513499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.513524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 00:24:21.228 [2024-07-24 19:06:58.513647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.513671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 00:24:21.228 [2024-07-24 19:06:58.513801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.513827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 
00:24:21.228 [2024-07-24 19:06:58.513983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.514009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 00:24:21.228 [2024-07-24 19:06:58.514131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.514156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 00:24:21.228 [2024-07-24 19:06:58.514304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.514329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 00:24:21.228 [2024-07-24 19:06:58.514454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.514479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 00:24:21.228 [2024-07-24 19:06:58.514654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.514678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 00:24:21.228 [2024-07-24 19:06:58.514851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.514876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 00:24:21.228 [2024-07-24 19:06:58.515038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.515078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 00:24:21.228 [2024-07-24 19:06:58.515225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.515253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 00:24:21.228 [2024-07-24 19:06:58.515427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.515453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 00:24:21.228 [2024-07-24 19:06:58.515614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.515640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 
00:24:21.228 [2024-07-24 19:06:58.515795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.515819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.228 qpair failed and we were unable to recover it. 00:24:21.228 [2024-07-24 19:06:58.515993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.228 [2024-07-24 19:06:58.516018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.516142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.516168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.516309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.516334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.516462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.516487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.516635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.516660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.516835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.516860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.517037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.517061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.517198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.517224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.517376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.517401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 
00:24:21.229 [2024-07-24 19:06:58.517530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.517554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.517709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.517733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.517871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.517896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.518027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.518052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.518208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.518233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.518354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.518380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.518500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.518525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.518679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.518704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.518855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.518880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.519006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.519031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 
00:24:21.229 [2024-07-24 19:06:58.519157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.519182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.519305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.519330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.519477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.519502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.519632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.519657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.519790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.519815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.519966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.519991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.520140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.520171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.520295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.520320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.520453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.520478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.520628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.520653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 
00:24:21.229 [2024-07-24 19:06:58.520801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.520826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.520982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.521006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.521167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.521193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.521348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.521373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.521528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.521553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.521682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.521708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.521855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.521880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.522034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.522059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.522204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.522243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.229 qpair failed and we were unable to recover it. 00:24:21.229 [2024-07-24 19:06:58.522406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.229 [2024-07-24 19:06:58.522432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 
00:24:21.230 [2024-07-24 19:06:58.522593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.522621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.230 [2024-07-24 19:06:58.522853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.522878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.230 [2024-07-24 19:06:58.523063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.523089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.230 [2024-07-24 19:06:58.523224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.523250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.230 [2024-07-24 19:06:58.523408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.523433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.230 [2024-07-24 19:06:58.523560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.523586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.230 [2024-07-24 19:06:58.523740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.523766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.230 [2024-07-24 19:06:58.523920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.523946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.230 [2024-07-24 19:06:58.524097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.524131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.230 [2024-07-24 19:06:58.524260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.524285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 
00:24:21.230 [2024-07-24 19:06:58.524411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.524436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.230 [2024-07-24 19:06:58.524588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.524613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.230 [2024-07-24 19:06:58.524784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.524810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.230 [2024-07-24 19:06:58.524984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.525014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.230 [2024-07-24 19:06:58.525167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.525193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.230 [2024-07-24 19:06:58.525339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.525364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.230 [2024-07-24 19:06:58.525538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.525564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.230 [2024-07-24 19:06:58.525720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.525744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.230 [2024-07-24 19:06:58.525889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.525914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.230 [2024-07-24 19:06:58.526061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.526086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 
00:24:21.230 [2024-07-24 19:06:58.526222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.526247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.230 [2024-07-24 19:06:58.526403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.526428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.230 [2024-07-24 19:06:58.526588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.526613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.230 [2024-07-24 19:06:58.526748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.526773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.230 [2024-07-24 19:06:58.526952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.526977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.230 [2024-07-24 19:06:58.527118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.527144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.230 [2024-07-24 19:06:58.527302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.527327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.230 [2024-07-24 19:06:58.527458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.527483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.230 [2024-07-24 19:06:58.527610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.527635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.230 [2024-07-24 19:06:58.527783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.527808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 
00:24:21.230 [2024-07-24 19:06:58.527986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.528011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.230 [2024-07-24 19:06:58.528139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.528164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.230 [2024-07-24 19:06:58.528318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.528343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.230 [2024-07-24 19:06:58.528494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.528520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.230 [2024-07-24 19:06:58.528637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.528662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.230 [2024-07-24 19:06:58.528810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.528835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.230 [2024-07-24 19:06:58.528982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.529007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.230 [2024-07-24 19:06:58.529167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.230 [2024-07-24 19:06:58.529207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.230 qpair failed and we were unable to recover it. 00:24:21.231 [2024-07-24 19:06:58.529366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.231 [2024-07-24 19:06:58.529393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.231 qpair failed and we were unable to recover it. 00:24:21.231 [2024-07-24 19:06:58.529518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.231 [2024-07-24 19:06:58.529544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.231 qpair failed and we were unable to recover it. 
00:24:21.231 [2024-07-24 19:06:58.529673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.231 [2024-07-24 19:06:58.529706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.231 qpair failed and we were unable to recover it. 00:24:21.231 [2024-07-24 19:06:58.529858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.231 [2024-07-24 19:06:58.529884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.231 qpair failed and we were unable to recover it. 00:24:21.231 [2024-07-24 19:06:58.530060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.231 [2024-07-24 19:06:58.530086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.231 qpair failed and we were unable to recover it. 00:24:21.231 [2024-07-24 19:06:58.530221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.231 [2024-07-24 19:06:58.530248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.231 qpair failed and we were unable to recover it. 00:24:21.231 [2024-07-24 19:06:58.530399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.231 [2024-07-24 19:06:58.530424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.231 qpair failed and we were unable to recover it. 00:24:21.231 [2024-07-24 19:06:58.530582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.231 [2024-07-24 19:06:58.530607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.231 qpair failed and we were unable to recover it. 00:24:21.231 [2024-07-24 19:06:58.530732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.231 [2024-07-24 19:06:58.530756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.231 qpair failed and we were unable to recover it. 00:24:21.231 [2024-07-24 19:06:58.530904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.231 [2024-07-24 19:06:58.530929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.231 qpair failed and we were unable to recover it. 00:24:21.231 [2024-07-24 19:06:58.531070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.231 [2024-07-24 19:06:58.531095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.231 qpair failed and we were unable to recover it. 00:24:21.231 [2024-07-24 19:06:58.531239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.231 [2024-07-24 19:06:58.531266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.231 qpair failed and we were unable to recover it. 
00:24:21.231 [2024-07-24 19:06:58.531445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.231 [2024-07-24 19:06:58.531471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.231 qpair failed and we were unable to recover it.
00:24:21.237 [2024-07-24 19:06:58.567851] (the posix_sock_create / nvme_tcp_qpair_connect_sock error pair above repeats back-to-back for every connect attempt between 19:06:58.531445 and 19:06:58.567851; only the timestamps and the failing qpair handle vary. tqpair=0x17c6250, tqpair=0x7f0728000b90, and tqpair=0x7f0718000b90 all fail against addr=10.0.0.2, port=4420 with errno = 111, and each attempt ends with "qpair failed and we were unable to recover it.")
00:24:21.237 [2024-07-24 19:06:58.568026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.237 [2024-07-24 19:06:58.568051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.237 qpair failed and we were unable to recover it. 00:24:21.237 [2024-07-24 19:06:58.568180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.237 [2024-07-24 19:06:58.568206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.237 qpair failed and we were unable to recover it. 00:24:21.237 [2024-07-24 19:06:58.568362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.237 [2024-07-24 19:06:58.568387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.237 qpair failed and we were unable to recover it. 00:24:21.237 [2024-07-24 19:06:58.568570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.237 [2024-07-24 19:06:58.568595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.237 qpair failed and we were unable to recover it. 00:24:21.237 [2024-07-24 19:06:58.568753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.237 [2024-07-24 19:06:58.568779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.237 qpair failed and we were unable to recover it. 00:24:21.237 [2024-07-24 19:06:58.568911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.237 [2024-07-24 19:06:58.568936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.237 qpair failed and we were unable to recover it. 00:24:21.237 [2024-07-24 19:06:58.569092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.237 [2024-07-24 19:06:58.569129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.237 qpair failed and we were unable to recover it. 00:24:21.237 [2024-07-24 19:06:58.569313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.237 [2024-07-24 19:06:58.569338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.237 qpair failed and we were unable to recover it. 00:24:21.237 [2024-07-24 19:06:58.569513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.237 [2024-07-24 19:06:58.569538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.237 qpair failed and we were unable to recover it. 00:24:21.237 [2024-07-24 19:06:58.569670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.237 [2024-07-24 19:06:58.569700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.237 qpair failed and we were unable to recover it. 
00:24:21.237 [2024-07-24 19:06:58.569822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.237 [2024-07-24 19:06:58.569846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.237 qpair failed and we were unable to recover it. 00:24:21.237 [2024-07-24 19:06:58.569998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.237 [2024-07-24 19:06:58.570023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.237 qpair failed and we were unable to recover it. 00:24:21.237 [2024-07-24 19:06:58.570206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.237 [2024-07-24 19:06:58.570232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.237 qpair failed and we were unable to recover it. 00:24:21.237 [2024-07-24 19:06:58.570388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.237 [2024-07-24 19:06:58.570414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.237 qpair failed and we were unable to recover it. 00:24:21.237 [2024-07-24 19:06:58.570561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.237 [2024-07-24 19:06:58.570586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.237 qpair failed and we were unable to recover it. 00:24:21.237 [2024-07-24 19:06:58.570767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.237 [2024-07-24 19:06:58.570792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.237 qpair failed and we were unable to recover it. 00:24:21.237 [2024-07-24 19:06:58.570928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.237 [2024-07-24 19:06:58.570953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.237 qpair failed and we were unable to recover it. 00:24:21.237 [2024-07-24 19:06:58.571083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.237 [2024-07-24 19:06:58.571114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.237 qpair failed and we were unable to recover it. 00:24:21.237 [2024-07-24 19:06:58.571266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.237 [2024-07-24 19:06:58.571291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.237 qpair failed and we were unable to recover it. 00:24:21.237 [2024-07-24 19:06:58.571416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.237 [2024-07-24 19:06:58.571441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.237 qpair failed and we were unable to recover it. 
00:24:21.237 [2024-07-24 19:06:58.571595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.237 [2024-07-24 19:06:58.571621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.237 qpair failed and we were unable to recover it. 00:24:21.237 [2024-07-24 19:06:58.571775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.237 [2024-07-24 19:06:58.571801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.237 qpair failed and we were unable to recover it. 00:24:21.237 [2024-07-24 19:06:58.571947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.237 [2024-07-24 19:06:58.571972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.237 qpair failed and we were unable to recover it. 00:24:21.237 [2024-07-24 19:06:58.572168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.237 [2024-07-24 19:06:58.572208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.237 qpair failed and we were unable to recover it. 00:24:21.237 [2024-07-24 19:06:58.572343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.237 [2024-07-24 19:06:58.572370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.237 qpair failed and we were unable to recover it. 00:24:21.237 [2024-07-24 19:06:58.572521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.237 [2024-07-24 19:06:58.572546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.237 qpair failed and we were unable to recover it. 00:24:21.237 [2024-07-24 19:06:58.572707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.237 [2024-07-24 19:06:58.572732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.237 qpair failed and we were unable to recover it. 00:24:21.237 [2024-07-24 19:06:58.572861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.237 [2024-07-24 19:06:58.572888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.237 qpair failed and we were unable to recover it. 00:24:21.237 [2024-07-24 19:06:58.573066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.237 [2024-07-24 19:06:58.573092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.237 qpair failed and we were unable to recover it. 00:24:21.237 [2024-07-24 19:06:58.573271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.237 [2024-07-24 19:06:58.573297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.237 qpair failed and we were unable to recover it. 
00:24:21.237 [2024-07-24 19:06:58.573413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.237 [2024-07-24 19:06:58.573438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.237 qpair failed and we were unable to recover it. 00:24:21.237 [2024-07-24 19:06:58.573565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.237 [2024-07-24 19:06:58.573591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.237 qpair failed and we were unable to recover it. 00:24:21.237 [2024-07-24 19:06:58.573775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.573801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 00:24:21.238 [2024-07-24 19:06:58.573930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.573955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 00:24:21.238 [2024-07-24 19:06:58.574116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.574142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 00:24:21.238 [2024-07-24 19:06:58.574275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.574300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 00:24:21.238 [2024-07-24 19:06:58.574424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.574453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 00:24:21.238 [2024-07-24 19:06:58.574604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.574629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 00:24:21.238 [2024-07-24 19:06:58.574754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.574779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 00:24:21.238 [2024-07-24 19:06:58.574954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.574979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 
00:24:21.238 [2024-07-24 19:06:58.575136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.575175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 00:24:21.238 [2024-07-24 19:06:58.575325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.575352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 00:24:21.238 [2024-07-24 19:06:58.575504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.575531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 00:24:21.238 [2024-07-24 19:06:58.575657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.575683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 00:24:21.238 [2024-07-24 19:06:58.575805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.575833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 00:24:21.238 [2024-07-24 19:06:58.575988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.576014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 00:24:21.238 [2024-07-24 19:06:58.576184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.576211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 00:24:21.238 [2024-07-24 19:06:58.576370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.576396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 00:24:21.238 [2024-07-24 19:06:58.576527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.576554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 00:24:21.238 [2024-07-24 19:06:58.576704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.576730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 
00:24:21.238 [2024-07-24 19:06:58.576923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.576962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 00:24:21.238 [2024-07-24 19:06:58.577097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.577130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 00:24:21.238 [2024-07-24 19:06:58.577276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.577302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 00:24:21.238 [2024-07-24 19:06:58.577430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.577456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 00:24:21.238 [2024-07-24 19:06:58.577617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.577644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 00:24:21.238 [2024-07-24 19:06:58.577773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.577798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 00:24:21.238 [2024-07-24 19:06:58.577927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.577953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 00:24:21.238 [2024-07-24 19:06:58.578077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.578109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 00:24:21.238 [2024-07-24 19:06:58.578241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.578268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 00:24:21.238 [2024-07-24 19:06:58.578417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.578443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 
00:24:21.238 [2024-07-24 19:06:58.578566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.578591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 00:24:21.238 [2024-07-24 19:06:58.578746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.578771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 00:24:21.238 [2024-07-24 19:06:58.578895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.578921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 00:24:21.238 [2024-07-24 19:06:58.579060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.579085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 00:24:21.238 [2024-07-24 19:06:58.579257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.579281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 00:24:21.238 [2024-07-24 19:06:58.579412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.579439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 00:24:21.238 [2024-07-24 19:06:58.579591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.579617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 00:24:21.238 [2024-07-24 19:06:58.579774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.579799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 00:24:21.238 [2024-07-24 19:06:58.579967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.579993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 00:24:21.238 [2024-07-24 19:06:58.580143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.238 [2024-07-24 19:06:58.580168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.238 qpair failed and we were unable to recover it. 
00:24:21.238 [2024-07-24 19:06:58.580323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.580348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.239 [2024-07-24 19:06:58.580504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.580529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.239 [2024-07-24 19:06:58.580650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.580675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.239 [2024-07-24 19:06:58.580789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.580813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.239 [2024-07-24 19:06:58.580947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.580973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.239 [2024-07-24 19:06:58.581127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.581157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.239 [2024-07-24 19:06:58.581291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.581322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.239 [2024-07-24 19:06:58.581454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.581480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.239 [2024-07-24 19:06:58.581610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.581634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.239 [2024-07-24 19:06:58.581763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.581789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 
00:24:21.239 [2024-07-24 19:06:58.581945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.581969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.239 [2024-07-24 19:06:58.582115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.582147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.239 [2024-07-24 19:06:58.582304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.582329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.239 [2024-07-24 19:06:58.582486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.582510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.239 [2024-07-24 19:06:58.582644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.582669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.239 [2024-07-24 19:06:58.582818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.582843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.239 [2024-07-24 19:06:58.582985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.583010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.239 [2024-07-24 19:06:58.583165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.583189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.239 [2024-07-24 19:06:58.583348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.583373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.239 [2024-07-24 19:06:58.583530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.583556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 
00:24:21.239 [2024-07-24 19:06:58.583689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.583714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.239 [2024-07-24 19:06:58.583868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.583895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.239 [2024-07-24 19:06:58.584025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.584049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.239 [2024-07-24 19:06:58.584177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.584203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.239 [2024-07-24 19:06:58.584355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.584381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.239 [2024-07-24 19:06:58.584510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.584535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.239 [2024-07-24 19:06:58.584686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.584711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.239 [2024-07-24 19:06:58.584832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.584858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.239 [2024-07-24 19:06:58.585014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.585038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.239 [2024-07-24 19:06:58.585196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.585222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 
00:24:21.239 [2024-07-24 19:06:58.585388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.585426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.239 [2024-07-24 19:06:58.585606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.585631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.239 [2024-07-24 19:06:58.585763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.585790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.239 [2024-07-24 19:06:58.585936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.585967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.239 [2024-07-24 19:06:58.586088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.586123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.239 [2024-07-24 19:06:58.586249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.586274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.239 [2024-07-24 19:06:58.586402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.586427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.239 [2024-07-24 19:06:58.586583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.239 [2024-07-24 19:06:58.586608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.239 qpair failed and we were unable to recover it. 00:24:21.240 [2024-07-24 19:06:58.586755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.240 [2024-07-24 19:06:58.586780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.240 qpair failed and we were unable to recover it. 00:24:21.240 [2024-07-24 19:06:58.586926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.240 [2024-07-24 19:06:58.586953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.240 qpair failed and we were unable to recover it. 
00:24:21.240 [2024-07-24 19:06:58.587108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.240 [2024-07-24 19:06:58.587133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.240 qpair failed and we were unable to recover it. 00:24:21.240 [2024-07-24 19:06:58.587284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.240 [2024-07-24 19:06:58.587308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.240 qpair failed and we were unable to recover it. 00:24:21.240 [2024-07-24 19:06:58.587487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.240 [2024-07-24 19:06:58.587513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.240 qpair failed and we were unable to recover it. 00:24:21.240 [2024-07-24 19:06:58.587664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.240 [2024-07-24 19:06:58.587690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.240 qpair failed and we were unable to recover it. 00:24:21.240 [2024-07-24 19:06:58.587816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.240 [2024-07-24 19:06:58.587841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.240 qpair failed and we were unable to recover it. 00:24:21.240 [2024-07-24 19:06:58.587993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.240 [2024-07-24 19:06:58.588018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.240 qpair failed and we were unable to recover it. 00:24:21.240 [2024-07-24 19:06:58.588171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.240 [2024-07-24 19:06:58.588197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.240 qpair failed and we were unable to recover it. 00:24:21.240 [2024-07-24 19:06:58.588331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.240 [2024-07-24 19:06:58.588356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.240 qpair failed and we were unable to recover it. 00:24:21.240 [2024-07-24 19:06:58.588508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.240 [2024-07-24 19:06:58.588533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.240 qpair failed and we were unable to recover it. 00:24:21.240 [2024-07-24 19:06:58.588683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.240 [2024-07-24 19:06:58.588708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.240 qpair failed and we were unable to recover it. 
00:24:21.240 [2024-07-24 19:06:58.588858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.240 [2024-07-24 19:06:58.588882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.240 qpair failed and we were unable to recover it. 00:24:21.240 [2024-07-24 19:06:58.589011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.240 [2024-07-24 19:06:58.589036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.240 qpair failed and we were unable to recover it. 00:24:21.240 [2024-07-24 19:06:58.589191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.240 [2024-07-24 19:06:58.589218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.240 qpair failed and we were unable to recover it. 00:24:21.240 [2024-07-24 19:06:58.589367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.240 [2024-07-24 19:06:58.589392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.240 qpair failed and we were unable to recover it. 00:24:21.240 [2024-07-24 19:06:58.589542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.240 [2024-07-24 19:06:58.589568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.240 qpair failed and we were unable to recover it. 00:24:21.240 [2024-07-24 19:06:58.589718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.240 [2024-07-24 19:06:58.589743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.240 qpair failed and we were unable to recover it. 00:24:21.240 [2024-07-24 19:06:58.589877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.240 [2024-07-24 19:06:58.589901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.240 qpair failed and we were unable to recover it. 00:24:21.240 [2024-07-24 19:06:58.590023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.240 [2024-07-24 19:06:58.590047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.240 qpair failed and we were unable to recover it. 00:24:21.240 [2024-07-24 19:06:58.590218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.240 [2024-07-24 19:06:58.590258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.240 qpair failed and we were unable to recover it. 00:24:21.240 [2024-07-24 19:06:58.590396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.240 [2024-07-24 19:06:58.590423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.240 qpair failed and we were unable to recover it. 
00:24:21.240 [2024-07-24 19:06:58.590576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.240 [2024-07-24 19:06:58.590607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.240 qpair failed and we were unable to recover it.
00:24:21.240 [2024-07-24 19:06:58.590762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.240 [2024-07-24 19:06:58.590788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.240 qpair failed and we were unable to recover it.
00:24:21.240 [2024-07-24 19:06:58.590944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.240 [2024-07-24 19:06:58.590971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.240 qpair failed and we were unable to recover it.
00:24:21.240 [2024-07-24 19:06:58.591155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.240 [2024-07-24 19:06:58.591182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.240 qpair failed and we were unable to recover it.
00:24:21.240 [2024-07-24 19:06:58.591318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.240 [2024-07-24 19:06:58.591345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.240 qpair failed and we were unable to recover it.
00:24:21.240 [2024-07-24 19:06:58.591531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.240 [2024-07-24 19:06:58.591557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.240 qpair failed and we were unable to recover it.
00:24:21.240 [2024-07-24 19:06:58.591735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.240 [2024-07-24 19:06:58.591761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.240 qpair failed and we were unable to recover it.
00:24:21.240 [2024-07-24 19:06:58.591892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.240 [2024-07-24 19:06:58.591920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.240 qpair failed and we were unable to recover it.
00:24:21.240 [2024-07-24 19:06:58.592050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.240 [2024-07-24 19:06:58.592075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.240 qpair failed and we were unable to recover it.
00:24:21.240 [2024-07-24 19:06:58.592229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.240 [2024-07-24 19:06:58.592256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.240 qpair failed and we were unable to recover it.
00:24:21.240 [2024-07-24 19:06:58.592410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.240 [2024-07-24 19:06:58.592435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.240 qpair failed and we were unable to recover it.
00:24:21.240 [2024-07-24 19:06:58.592585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.240 [2024-07-24 19:06:58.592610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.240 qpair failed and we were unable to recover it.
00:24:21.240 [2024-07-24 19:06:58.592737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.240 [2024-07-24 19:06:58.592762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.240 qpair failed and we were unable to recover it.
00:24:21.240 [2024-07-24 19:06:58.592889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.240 [2024-07-24 19:06:58.592913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.240 qpair failed and we were unable to recover it.
00:24:21.240 [2024-07-24 19:06:58.593038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.240 [2024-07-24 19:06:58.593063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.240 qpair failed and we were unable to recover it.
00:24:21.240 [2024-07-24 19:06:58.593186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.240 [2024-07-24 19:06:58.593211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.240 qpair failed and we were unable to recover it.
00:24:21.241 [2024-07-24 19:06:58.593357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.241 [2024-07-24 19:06:58.593382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.241 qpair failed and we were unable to recover it.
00:24:21.241 [2024-07-24 19:06:58.593557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.241 [2024-07-24 19:06:58.593582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.241 qpair failed and we were unable to recover it.
00:24:21.241 [2024-07-24 19:06:58.593703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.241 [2024-07-24 19:06:58.593729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.241 qpair failed and we were unable to recover it.
00:24:21.241 [2024-07-24 19:06:58.593907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.241 [2024-07-24 19:06:58.593932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.241 qpair failed and we were unable to recover it.
00:24:21.241 [2024-07-24 19:06:58.594058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.241 [2024-07-24 19:06:58.594083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.241 qpair failed and we were unable to recover it.
00:24:21.241 [2024-07-24 19:06:58.594228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.241 [2024-07-24 19:06:58.594254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.241 qpair failed and we were unable to recover it.
00:24:21.241 [2024-07-24 19:06:58.594410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.241 [2024-07-24 19:06:58.594436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.241 qpair failed and we were unable to recover it.
00:24:21.241 [2024-07-24 19:06:58.594587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.241 [2024-07-24 19:06:58.594612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.241 qpair failed and we were unable to recover it.
00:24:21.241 [2024-07-24 19:06:58.594739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.241 [2024-07-24 19:06:58.594764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.241 qpair failed and we were unable to recover it.
00:24:21.241 [2024-07-24 19:06:58.594922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.241 [2024-07-24 19:06:58.594948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.241 qpair failed and we were unable to recover it.
00:24:21.241 [2024-07-24 19:06:58.595074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.241 [2024-07-24 19:06:58.595099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.241 qpair failed and we were unable to recover it.
00:24:21.241 [2024-07-24 19:06:58.595230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.241 [2024-07-24 19:06:58.595259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.241 qpair failed and we were unable to recover it.
00:24:21.241 [2024-07-24 19:06:58.595397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.241 [2024-07-24 19:06:58.595421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.241 qpair failed and we were unable to recover it.
00:24:21.241 [2024-07-24 19:06:58.595542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.241 [2024-07-24 19:06:58.595566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.241 qpair failed and we were unable to recover it.
00:24:21.241 [2024-07-24 19:06:58.595718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.241 [2024-07-24 19:06:58.595743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.241 qpair failed and we were unable to recover it.
00:24:21.241 [2024-07-24 19:06:58.595920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.241 [2024-07-24 19:06:58.595945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.241 qpair failed and we were unable to recover it.
00:24:21.241 [2024-07-24 19:06:58.596120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.241 [2024-07-24 19:06:58.596154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.241 qpair failed and we were unable to recover it.
00:24:21.241 [2024-07-24 19:06:58.596310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.241 [2024-07-24 19:06:58.596335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.241 qpair failed and we were unable to recover it.
00:24:21.241 [2024-07-24 19:06:58.596450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.241 [2024-07-24 19:06:58.596475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.241 qpair failed and we were unable to recover it.
00:24:21.241 [2024-07-24 19:06:58.596652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.241 [2024-07-24 19:06:58.596677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.241 qpair failed and we were unable to recover it.
00:24:21.241 [2024-07-24 19:06:58.596808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.241 [2024-07-24 19:06:58.596833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.241 qpair failed and we were unable to recover it.
00:24:21.241 [2024-07-24 19:06:58.596958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.241 [2024-07-24 19:06:58.596982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.241 qpair failed and we were unable to recover it.
00:24:21.241 [2024-07-24 19:06:58.597128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.241 [2024-07-24 19:06:58.597161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.241 qpair failed and we were unable to recover it.
00:24:21.241 [2024-07-24 19:06:58.597297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.241 [2024-07-24 19:06:58.597322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.241 qpair failed and we were unable to recover it.
00:24:21.241 [2024-07-24 19:06:58.597475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.241 [2024-07-24 19:06:58.597500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.241 qpair failed and we were unable to recover it.
00:24:21.241 [2024-07-24 19:06:58.597627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.241 [2024-07-24 19:06:58.597651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.241 qpair failed and we were unable to recover it.
00:24:21.241 [2024-07-24 19:06:58.597773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.241 [2024-07-24 19:06:58.597797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.241 qpair failed and we were unable to recover it.
00:24:21.241 [2024-07-24 19:06:58.597949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.241 [2024-07-24 19:06:58.597978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.241 qpair failed and we were unable to recover it.
00:24:21.241 [2024-07-24 19:06:58.598134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.241 [2024-07-24 19:06:58.598168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.241 qpair failed and we were unable to recover it.
00:24:21.241 [2024-07-24 19:06:58.598309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.241 [2024-07-24 19:06:58.598335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.241 qpair failed and we were unable to recover it.
00:24:21.241 [2024-07-24 19:06:58.598489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.241 [2024-07-24 19:06:58.598515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.241 qpair failed and we were unable to recover it.
00:24:21.241 [2024-07-24 19:06:58.598666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.241 [2024-07-24 19:06:58.598692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.241 qpair failed and we were unable to recover it.
00:24:21.241 [2024-07-24 19:06:58.598825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.241 [2024-07-24 19:06:58.598850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.241 qpair failed and we were unable to recover it.
00:24:21.241 [2024-07-24 19:06:58.599005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.599031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.599189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.599216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.599338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.599364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.599489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.599514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.599666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.599690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.599829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.599858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.599987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.600012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.600161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.600187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.600308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.600333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.600456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.600481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.600596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.600621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.600742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.600767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.600883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.600908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.601036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.601061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.601184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.601209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.601358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.601383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.601529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.601554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.601689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.601714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.601870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.601898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.602039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.602065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.602241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.602268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.602402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.602429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.602581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.602607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.602735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.602763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.602896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.602923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.603077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.603109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.603278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.603303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.603457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.603482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.603611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.603636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.603784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.603809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.603959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.603985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.604122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.604158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.604294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.604327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.604446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.604471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.604621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.604645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.604801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.604826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.604980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.605008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.242 qpair failed and we were unable to recover it.
00:24:21.242 [2024-07-24 19:06:58.605134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.242 [2024-07-24 19:06:58.605165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.605332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.605359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.605491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.605517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.605648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.605674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.605855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.605881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.606011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.606037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.606192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.606218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.606343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.606369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.606504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.606530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.606689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.606716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.606847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.606874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.607004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.607031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.607185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.607210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.607333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.607358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.607482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.607507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.607627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.607652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.607782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.607807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.607982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.608006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.608162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.608187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.608307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.608333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.608454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.608479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.608609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.608634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.608766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.608795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.608973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.608998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.609116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.609142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.609272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.609297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.609448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.609473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.609615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.609640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.609795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.609819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.609947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.609972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.610087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.610118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.610281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.610306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.610478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.610503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.610645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.610669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.610787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.610812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.610941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.610969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.611143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.611170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.611329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.611355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.243 qpair failed and we were unable to recover it.
00:24:21.243 [2024-07-24 19:06:58.611536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.243 [2024-07-24 19:06:58.611562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.611695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.611721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.611897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.611923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.612055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.612081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.612264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.612289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.612418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.612443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.612587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.612612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.612742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.612768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.612923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.612949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.613076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.613115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.613277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.613302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.613432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.613464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.613639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.613664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.613798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.613823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.613971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.613996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.614149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.614175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.614322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.614347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.614525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.614550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.614681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.614706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.614863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.614888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.615041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.615066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.615189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.615214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.615347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.615372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.615525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.615552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.615706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.615731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.615888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.615914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.616065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.616090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.616261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.616301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.616447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.616476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.616647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.616673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.616849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.616875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.617055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.617081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.617220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.617247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.617402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.617428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.617605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.617632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.617788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.617814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.617946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.617971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.618125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.618151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.244 qpair failed and we were unable to recover it.
00:24:21.244 [2024-07-24 19:06:58.618290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.244 [2024-07-24 19:06:58.618329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.618493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.618521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.618680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.618705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.618879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.618904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.619084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.619116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.619271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.619297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.619456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.619482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.619635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.619660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.619779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.619804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.619970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.619997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.620159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.620186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.620339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.620365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.620540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.620566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.620723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.620749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.620887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.620914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.621084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.621115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.621248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.621274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.621432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.621458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.621580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.621606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.621752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.621778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.621957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.621982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.622140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.622167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.622318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.622344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.622493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.622519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.622647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.622673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.622850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.622876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.623010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.623036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.623167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.623193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.623343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.623369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.623498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.623525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.623683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.623709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.623905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.623943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.624118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.624145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.624297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.624323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.624447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.624472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.624622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.624647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.624770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.624795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.245 qpair failed and we were unable to recover it.
00:24:21.245 [2024-07-24 19:06:58.624990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.245 [2024-07-24 19:06:58.625015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.246 qpair failed and we were unable to recover it.
00:24:21.246 [2024-07-24 19:06:58.625143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.246 [2024-07-24 19:06:58.625169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.246 qpair failed and we were unable to recover it.
00:24:21.246 [2024-07-24 19:06:58.625322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.246 [2024-07-24 19:06:58.625347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.246 qpair failed and we were unable to recover it.
00:24:21.246 [2024-07-24 19:06:58.625495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.246 [2024-07-24 19:06:58.625525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.246 qpair failed and we were unable to recover it.
00:24:21.246 [2024-07-24 19:06:58.625672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.246 [2024-07-24 19:06:58.625697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.246 qpair failed and we were unable to recover it.
00:24:21.246 [2024-07-24 19:06:58.625872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.246 [2024-07-24 19:06:58.625897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.246 qpair failed and we were unable to recover it.
00:24:21.246 [2024-07-24 19:06:58.626033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.246 [2024-07-24 19:06:58.626057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.246 qpair failed and we were unable to recover it.
00:24:21.246 [2024-07-24 19:06:58.626216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.246 [2024-07-24 19:06:58.626243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.246 qpair failed and we were unable to recover it.
00:24:21.246 [2024-07-24 19:06:58.626399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.246 [2024-07-24 19:06:58.626425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.246 qpair failed and we were unable to recover it.
00:24:21.246 [2024-07-24 19:06:58.626554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.246 [2024-07-24 19:06:58.626579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.246 qpair failed and we were unable to recover it.
00:24:21.246 [2024-07-24 19:06:58.626724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.246 [2024-07-24 19:06:58.626749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.246 qpair failed and we were unable to recover it.
00:24:21.246 [2024-07-24 19:06:58.626923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.246 [2024-07-24 19:06:58.626948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.246 qpair failed and we were unable to recover it. 00:24:21.246 [2024-07-24 19:06:58.627118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.246 [2024-07-24 19:06:58.627144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.246 qpair failed and we were unable to recover it. 00:24:21.246 [2024-07-24 19:06:58.627274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.246 [2024-07-24 19:06:58.627299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.246 qpair failed and we were unable to recover it. 00:24:21.246 [2024-07-24 19:06:58.627479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.246 [2024-07-24 19:06:58.627504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.246 qpair failed and we were unable to recover it. 00:24:21.246 [2024-07-24 19:06:58.627653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.246 [2024-07-24 19:06:58.627678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.246 qpair failed and we were unable to recover it. 00:24:21.246 [2024-07-24 19:06:58.627804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.246 [2024-07-24 19:06:58.627829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.246 qpair failed and we were unable to recover it. 00:24:21.246 [2024-07-24 19:06:58.628021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.246 [2024-07-24 19:06:58.628060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.246 qpair failed and we were unable to recover it. 00:24:21.246 [2024-07-24 19:06:58.628229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.246 [2024-07-24 19:06:58.628257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.246 qpair failed and we were unable to recover it. 00:24:21.246 [2024-07-24 19:06:58.628390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.246 [2024-07-24 19:06:58.628415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.246 qpair failed and we were unable to recover it. 00:24:21.246 [2024-07-24 19:06:58.628568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.246 [2024-07-24 19:06:58.628592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.246 qpair failed and we were unable to recover it. 
00:24:21.246 [2024-07-24 19:06:58.628748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.246 [2024-07-24 19:06:58.628773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.246 qpair failed and we were unable to recover it. 00:24:21.246 [2024-07-24 19:06:58.628902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.246 [2024-07-24 19:06:58.628927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.246 qpair failed and we were unable to recover it. 00:24:21.246 [2024-07-24 19:06:58.629088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.246 [2024-07-24 19:06:58.629122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.246 qpair failed and we were unable to recover it. 00:24:21.246 [2024-07-24 19:06:58.629286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.246 [2024-07-24 19:06:58.629312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.246 qpair failed and we were unable to recover it. 00:24:21.246 [2024-07-24 19:06:58.629465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.246 [2024-07-24 19:06:58.629490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.246 qpair failed and we were unable to recover it. 00:24:21.246 [2024-07-24 19:06:58.629645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.246 [2024-07-24 19:06:58.629670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.246 qpair failed and we were unable to recover it. 00:24:21.246 [2024-07-24 19:06:58.629803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.246 [2024-07-24 19:06:58.629828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.246 qpair failed and we were unable to recover it. 00:24:21.246 [2024-07-24 19:06:58.629975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.246 [2024-07-24 19:06:58.630000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.246 qpair failed and we were unable to recover it. 00:24:21.246 [2024-07-24 19:06:58.630159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.246 [2024-07-24 19:06:58.630185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.246 qpair failed and we were unable to recover it. 00:24:21.246 [2024-07-24 19:06:58.630306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.246 [2024-07-24 19:06:58.630335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.246 qpair failed and we were unable to recover it. 
00:24:21.246 [2024-07-24 19:06:58.630462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.246 [2024-07-24 19:06:58.630487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.246 qpair failed and we were unable to recover it. 00:24:21.246 [2024-07-24 19:06:58.630635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.246 [2024-07-24 19:06:58.630660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.246 qpair failed and we were unable to recover it. 00:24:21.246 [2024-07-24 19:06:58.630806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.630831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 00:24:21.247 [2024-07-24 19:06:58.630976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.631001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 00:24:21.247 [2024-07-24 19:06:58.631154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.631179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 00:24:21.247 [2024-07-24 19:06:58.631304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.631330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 00:24:21.247 [2024-07-24 19:06:58.631446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.631471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 00:24:21.247 [2024-07-24 19:06:58.631603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.631629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 00:24:21.247 [2024-07-24 19:06:58.631807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.631832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 00:24:21.247 [2024-07-24 19:06:58.631980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.632005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 
00:24:21.247 [2024-07-24 19:06:58.632128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.632154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 00:24:21.247 [2024-07-24 19:06:58.632275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.632300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 00:24:21.247 [2024-07-24 19:06:58.632480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.632505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 00:24:21.247 [2024-07-24 19:06:58.632669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.632693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 00:24:21.247 [2024-07-24 19:06:58.632841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.632865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 00:24:21.247 [2024-07-24 19:06:58.632990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.633016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 00:24:21.247 [2024-07-24 19:06:58.633174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.633199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 00:24:21.247 [2024-07-24 19:06:58.633328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.633352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 00:24:21.247 [2024-07-24 19:06:58.633473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.633498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 00:24:21.247 [2024-07-24 19:06:58.633650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.633675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 
00:24:21.247 [2024-07-24 19:06:58.633800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.633825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 00:24:21.247 [2024-07-24 19:06:58.633979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.634004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 00:24:21.247 [2024-07-24 19:06:58.634133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.634159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 00:24:21.247 [2024-07-24 19:06:58.634287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.634312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 00:24:21.247 [2024-07-24 19:06:58.634431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.634456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 00:24:21.247 [2024-07-24 19:06:58.634599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.634624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 00:24:21.247 [2024-07-24 19:06:58.634802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.634831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 00:24:21.247 [2024-07-24 19:06:58.634964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.634989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 00:24:21.247 [2024-07-24 19:06:58.635154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.635180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 00:24:21.247 [2024-07-24 19:06:58.635301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.635326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 
00:24:21.247 [2024-07-24 19:06:58.635476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.635501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 00:24:21.247 [2024-07-24 19:06:58.635677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.635702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 00:24:21.247 [2024-07-24 19:06:58.635849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.635873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 00:24:21.247 [2024-07-24 19:06:58.635996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.636021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 00:24:21.247 [2024-07-24 19:06:58.636195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.636221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 00:24:21.247 [2024-07-24 19:06:58.636338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.636363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 00:24:21.247 [2024-07-24 19:06:58.636548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.247 [2024-07-24 19:06:58.636573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.247 qpair failed and we were unable to recover it. 00:24:21.248 [2024-07-24 19:06:58.636729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.636753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 00:24:21.248 [2024-07-24 19:06:58.636934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.636958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 00:24:21.248 [2024-07-24 19:06:58.637137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.637162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 
00:24:21.248 [2024-07-24 19:06:58.637304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.637343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 00:24:21.248 [2024-07-24 19:06:58.637509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.637536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 00:24:21.248 [2024-07-24 19:06:58.637694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.637719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 00:24:21.248 [2024-07-24 19:06:58.637842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.637866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 00:24:21.248 [2024-07-24 19:06:58.638018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.638043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 00:24:21.248 [2024-07-24 19:06:58.638223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.638249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 00:24:21.248 [2024-07-24 19:06:58.638377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.638403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 00:24:21.248 [2024-07-24 19:06:58.638554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.638580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 00:24:21.248 [2024-07-24 19:06:58.638749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.638773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 00:24:21.248 [2024-07-24 19:06:58.638902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.638926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 
00:24:21.248 [2024-07-24 19:06:58.639083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.639115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 00:24:21.248 [2024-07-24 19:06:58.639250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.639276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 00:24:21.248 [2024-07-24 19:06:58.639430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.639457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 00:24:21.248 [2024-07-24 19:06:58.639585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.639614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 00:24:21.248 [2024-07-24 19:06:58.639740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.639765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 00:24:21.248 [2024-07-24 19:06:58.639898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.639923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 00:24:21.248 [2024-07-24 19:06:58.640042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.640067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 00:24:21.248 [2024-07-24 19:06:58.640201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.640237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 00:24:21.248 [2024-07-24 19:06:58.640387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.640412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 00:24:21.248 [2024-07-24 19:06:58.640539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.640564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 
00:24:21.248 [2024-07-24 19:06:58.640715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.640741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 00:24:21.248 [2024-07-24 19:06:58.640888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.640916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 00:24:21.248 [2024-07-24 19:06:58.641071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.641097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 00:24:21.248 [2024-07-24 19:06:58.641260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.641286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 00:24:21.248 [2024-07-24 19:06:58.641459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.641484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 00:24:21.248 [2024-07-24 19:06:58.641613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.641639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 00:24:21.248 [2024-07-24 19:06:58.641787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.641812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 00:24:21.248 [2024-07-24 19:06:58.641995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.642021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 00:24:21.248 [2024-07-24 19:06:58.642175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.642200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 00:24:21.248 [2024-07-24 19:06:58.642372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.642397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 
00:24:21.248 [2024-07-24 19:06:58.642548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.642583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 00:24:21.248 [2024-07-24 19:06:58.642743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.642768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 00:24:21.248 [2024-07-24 19:06:58.642886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.642911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 00:24:21.248 [2024-07-24 19:06:58.643040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.643067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.248 qpair failed and we were unable to recover it. 00:24:21.248 [2024-07-24 19:06:58.643252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-07-24 19:06:58.643278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 00:24:21.249 [2024-07-24 19:06:58.643432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.643457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 00:24:21.249 [2024-07-24 19:06:58.643612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.643636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 00:24:21.249 [2024-07-24 19:06:58.643760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.643785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 00:24:21.249 [2024-07-24 19:06:58.643945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.643970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 00:24:21.249 [2024-07-24 19:06:58.644118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.644143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 
00:24:21.249 [2024-07-24 19:06:58.644276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.644307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 00:24:21.249 [2024-07-24 19:06:58.644435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.644462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 00:24:21.249 [2024-07-24 19:06:58.644621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.644646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 00:24:21.249 [2024-07-24 19:06:58.644794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.644819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 00:24:21.249 [2024-07-24 19:06:58.644952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.644977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 00:24:21.249 [2024-07-24 19:06:58.645132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.645158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 00:24:21.249 [2024-07-24 19:06:58.645286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.645311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 00:24:21.249 [2024-07-24 19:06:58.645442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.645466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 00:24:21.249 [2024-07-24 19:06:58.645594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.645619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 00:24:21.249 [2024-07-24 19:06:58.645770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.645795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 
00:24:21.249 [2024-07-24 19:06:58.645943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.645968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 00:24:21.249 [2024-07-24 19:06:58.646141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.646167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 00:24:21.249 [2024-07-24 19:06:58.646320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.646344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 00:24:21.249 [2024-07-24 19:06:58.646464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.646490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 00:24:21.249 [2024-07-24 19:06:58.646644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.646669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 00:24:21.249 [2024-07-24 19:06:58.646820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.646844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 00:24:21.249 [2024-07-24 19:06:58.646999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.647024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 00:24:21.249 [2024-07-24 19:06:58.647181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.647207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 00:24:21.249 [2024-07-24 19:06:58.647334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.647359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 00:24:21.249 [2024-07-24 19:06:58.647508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.647533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 
00:24:21.249 [2024-07-24 19:06:58.647683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.647709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 00:24:21.249 [2024-07-24 19:06:58.647839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.647863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 00:24:21.249 [2024-07-24 19:06:58.647986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.648011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 00:24:21.249 [2024-07-24 19:06:58.648164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.648189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 00:24:21.249 [2024-07-24 19:06:58.648311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.648336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 00:24:21.249 [2024-07-24 19:06:58.648468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.648495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 00:24:21.249 [2024-07-24 19:06:58.648671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.648696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 00:24:21.249 [2024-07-24 19:06:58.648860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.648885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 00:24:21.249 [2024-07-24 19:06:58.649010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.649035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 00:24:21.249 [2024-07-24 19:06:58.649192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.649217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 
00:24:21.249 [2024-07-24 19:06:58.649344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.249 [2024-07-24 19:06:58.649369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.249 qpair failed and we were unable to recover it. 00:24:21.249 [2024-07-24 19:06:58.649548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.250 [2024-07-24 19:06:58.649573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.250 qpair failed and we were unable to recover it. 00:24:21.250 [2024-07-24 19:06:58.649728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.250 [2024-07-24 19:06:58.649752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.250 qpair failed and we were unable to recover it. 00:24:21.250 [2024-07-24 19:06:58.649876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.250 [2024-07-24 19:06:58.649902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.250 qpair failed and we were unable to recover it. 00:24:21.250 [2024-07-24 19:06:58.650054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.250 [2024-07-24 19:06:58.650079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.250 qpair failed and we were unable to recover it. 00:24:21.250 [2024-07-24 19:06:58.650233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.250 [2024-07-24 19:06:58.650273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.250 qpair failed and we were unable to recover it. 00:24:21.250 [2024-07-24 19:06:58.650410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.250 [2024-07-24 19:06:58.650437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.250 qpair failed and we were unable to recover it. 00:24:21.250 [2024-07-24 19:06:58.650588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.250 [2024-07-24 19:06:58.650613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.250 qpair failed and we were unable to recover it. 00:24:21.250 [2024-07-24 19:06:58.650771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.250 [2024-07-24 19:06:58.650796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.250 qpair failed and we were unable to recover it. 00:24:21.250 [2024-07-24 19:06:58.650944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.250 [2024-07-24 19:06:58.650969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.250 qpair failed and we were unable to recover it. 
00:24:21.250 [2024-07-24 19:06:58.651096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.250 [2024-07-24 19:06:58.651135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.250 qpair failed and we were unable to recover it. 00:24:21.250 [2024-07-24 19:06:58.651263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.250 [2024-07-24 19:06:58.651289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.250 qpair failed and we were unable to recover it. 00:24:21.250 [2024-07-24 19:06:58.651441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.250 [2024-07-24 19:06:58.651467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.250 qpair failed and we were unable to recover it. 00:24:21.250 [2024-07-24 19:06:58.651621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.250 [2024-07-24 19:06:58.651646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.250 qpair failed and we were unable to recover it. 00:24:21.250 [2024-07-24 19:06:58.651802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.250 [2024-07-24 19:06:58.651827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.250 qpair failed and we were unable to recover it. 00:24:21.250 [2024-07-24 19:06:58.651979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.250 [2024-07-24 19:06:58.652004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.250 qpair failed and we were unable to recover it. 00:24:21.250 [2024-07-24 19:06:58.652160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.250 [2024-07-24 19:06:58.652186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.250 qpair failed and we were unable to recover it. 00:24:21.250 [2024-07-24 19:06:58.652315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.250 [2024-07-24 19:06:58.652340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.250 qpair failed and we were unable to recover it. 00:24:21.250 [2024-07-24 19:06:58.652490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.250 [2024-07-24 19:06:58.652515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.250 qpair failed and we were unable to recover it. 00:24:21.250 [2024-07-24 19:06:58.652672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.250 [2024-07-24 19:06:58.652697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.250 qpair failed and we were unable to recover it. 
00:24:21.250 [2024-07-24 19:06:58.652870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.250 [2024-07-24 19:06:58.652895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.250 qpair failed and we were unable to recover it.
00:24:21.250 [2024-07-24 19:06:58.653047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.250 [2024-07-24 19:06:58.653072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.250 qpair failed and we were unable to recover it.
00:24:21.250 [2024-07-24 19:06:58.653264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.250 [2024-07-24 19:06:58.653290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.250 qpair failed and we were unable to recover it.
00:24:21.250 [2024-07-24 19:06:58.653462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.250 [2024-07-24 19:06:58.653487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.250 qpair failed and we were unable to recover it.
00:24:21.250 [2024-07-24 19:06:58.653665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.250 [2024-07-24 19:06:58.653691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.250 qpair failed and we were unable to recover it.
00:24:21.250 [2024-07-24 19:06:58.653820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.250 [2024-07-24 19:06:58.653845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.250 qpair failed and we were unable to recover it.
00:24:21.250 [2024-07-24 19:06:58.654003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.250 [2024-07-24 19:06:58.654031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.250 qpair failed and we were unable to recover it.
00:24:21.250 [2024-07-24 19:06:58.654191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.250 [2024-07-24 19:06:58.654217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.250 qpair failed and we were unable to recover it.
00:24:21.250 [2024-07-24 19:06:58.654342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.250 [2024-07-24 19:06:58.654366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.250 qpair failed and we were unable to recover it.
00:24:21.250 [2024-07-24 19:06:58.654545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.250 [2024-07-24 19:06:58.654571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.250 qpair failed and we were unable to recover it.
00:24:21.250 [2024-07-24 19:06:58.654702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.250 [2024-07-24 19:06:58.654728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.250 qpair failed and we were unable to recover it.
00:24:21.250 [2024-07-24 19:06:58.654907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.250 [2024-07-24 19:06:58.654931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.250 qpair failed and we were unable to recover it.
00:24:21.250 [2024-07-24 19:06:58.655117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.250 [2024-07-24 19:06:58.655144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.250 qpair failed and we were unable to recover it.
00:24:21.250 [2024-07-24 19:06:58.655275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.250 [2024-07-24 19:06:58.655299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.250 qpair failed and we were unable to recover it.
00:24:21.250 [2024-07-24 19:06:58.655421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.251 [2024-07-24 19:06:58.655446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.251 qpair failed and we were unable to recover it.
00:24:21.251 [2024-07-24 19:06:58.655602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.251 [2024-07-24 19:06:58.655627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.251 qpair failed and we were unable to recover it.
00:24:21.251 [2024-07-24 19:06:58.655804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.251 [2024-07-24 19:06:58.655829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.251 qpair failed and we were unable to recover it.
00:24:21.251 [2024-07-24 19:06:58.655949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.251 [2024-07-24 19:06:58.655978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.251 qpair failed and we were unable to recover it.
00:24:21.251 [2024-07-24 19:06:58.656148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.251 [2024-07-24 19:06:58.656174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.251 qpair failed and we were unable to recover it.
00:24:21.251 [2024-07-24 19:06:58.656337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.251 [2024-07-24 19:06:58.656362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.251 qpair failed and we were unable to recover it.
00:24:21.251 [2024-07-24 19:06:58.656537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.251 [2024-07-24 19:06:58.656563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.251 qpair failed and we were unable to recover it.
00:24:21.251 [2024-07-24 19:06:58.656727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.251 [2024-07-24 19:06:58.656752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.251 qpair failed and we were unable to recover it.
00:24:21.251 [2024-07-24 19:06:58.656871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.251 [2024-07-24 19:06:58.656895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.251 qpair failed and we were unable to recover it.
00:24:21.251 [2024-07-24 19:06:58.657030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.251 [2024-07-24 19:06:58.657055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.251 qpair failed and we were unable to recover it.
00:24:21.251 [2024-07-24 19:06:58.657216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.251 [2024-07-24 19:06:58.657241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.251 qpair failed and we were unable to recover it.
00:24:21.251 [2024-07-24 19:06:58.657392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.251 [2024-07-24 19:06:58.657417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.251 qpair failed and we were unable to recover it.
00:24:21.251 [2024-07-24 19:06:58.657550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.251 [2024-07-24 19:06:58.657575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.251 qpair failed and we were unable to recover it.
00:24:21.251 [2024-07-24 19:06:58.657724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.251 [2024-07-24 19:06:58.657749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.251 qpair failed and we were unable to recover it.
00:24:21.251 [2024-07-24 19:06:58.657919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.251 [2024-07-24 19:06:58.657944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.251 qpair failed and we were unable to recover it.
00:24:21.251 [2024-07-24 19:06:58.658070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.251 [2024-07-24 19:06:58.658094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.251 qpair failed and we were unable to recover it.
00:24:21.251 [2024-07-24 19:06:58.658258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.251 [2024-07-24 19:06:58.658283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.251 qpair failed and we were unable to recover it.
00:24:21.251 [2024-07-24 19:06:58.658448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.251 [2024-07-24 19:06:58.658488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.251 qpair failed and we were unable to recover it.
00:24:21.251 [2024-07-24 19:06:58.658654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.251 [2024-07-24 19:06:58.658681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.251 qpair failed and we were unable to recover it.
00:24:21.251 [2024-07-24 19:06:58.658807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.251 [2024-07-24 19:06:58.658835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.251 qpair failed and we were unable to recover it.
00:24:21.251 [2024-07-24 19:06:58.659016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.251 [2024-07-24 19:06:58.659042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.251 qpair failed and we were unable to recover it.
00:24:21.251 [2024-07-24 19:06:58.659169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.251 [2024-07-24 19:06:58.659196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.251 qpair failed and we were unable to recover it.
00:24:21.251 [2024-07-24 19:06:58.659346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.251 [2024-07-24 19:06:58.659372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.251 qpair failed and we were unable to recover it.
00:24:21.251 [2024-07-24 19:06:58.659550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.251 [2024-07-24 19:06:58.659576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.251 qpair failed and we were unable to recover it.
00:24:21.251 [2024-07-24 19:06:58.659729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.251 [2024-07-24 19:06:58.659754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.251 qpair failed and we were unable to recover it.
00:24:21.251 [2024-07-24 19:06:58.659904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.251 [2024-07-24 19:06:58.659929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.251 qpair failed and we were unable to recover it.
00:24:21.251 [2024-07-24 19:06:58.660077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.251 [2024-07-24 19:06:58.660109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.251 qpair failed and we were unable to recover it.
00:24:21.251 [2024-07-24 19:06:58.660288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.251 [2024-07-24 19:06:58.660312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.252 qpair failed and we were unable to recover it.
00:24:21.252 [2024-07-24 19:06:58.660470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.252 [2024-07-24 19:06:58.660494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.252 qpair failed and we were unable to recover it.
00:24:21.252 [2024-07-24 19:06:58.660670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.252 [2024-07-24 19:06:58.660695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.252 qpair failed and we were unable to recover it.
00:24:21.252 [2024-07-24 19:06:58.660815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.252 [2024-07-24 19:06:58.660846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.252 qpair failed and we were unable to recover it.
00:24:21.252 [2024-07-24 19:06:58.660977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.252 [2024-07-24 19:06:58.661002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.252 qpair failed and we were unable to recover it.
00:24:21.252 [2024-07-24 19:06:58.661156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.252 [2024-07-24 19:06:58.661182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.252 qpair failed and we were unable to recover it.
00:24:21.252 [2024-07-24 19:06:58.661311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.252 [2024-07-24 19:06:58.661336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.252 qpair failed and we were unable to recover it.
00:24:21.252 [2024-07-24 19:06:58.661460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.252 [2024-07-24 19:06:58.661485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.252 qpair failed and we were unable to recover it.
00:24:21.252 [2024-07-24 19:06:58.661608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.252 [2024-07-24 19:06:58.661633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.252 qpair failed and we were unable to recover it.
00:24:21.252 [2024-07-24 19:06:58.661753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.252 [2024-07-24 19:06:58.661778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.252 qpair failed and we were unable to recover it.
00:24:21.252 [2024-07-24 19:06:58.661905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.252 [2024-07-24 19:06:58.661930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.252 qpair failed and we were unable to recover it.
00:24:21.252 [2024-07-24 19:06:58.662111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.252 [2024-07-24 19:06:58.662136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.252 qpair failed and we were unable to recover it.
00:24:21.252 [2024-07-24 19:06:58.662264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.252 [2024-07-24 19:06:58.662289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.252 qpair failed and we were unable to recover it.
00:24:21.252 [2024-07-24 19:06:58.662428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.252 [2024-07-24 19:06:58.662467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.252 qpair failed and we were unable to recover it.
00:24:21.252 [2024-07-24 19:06:58.662628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.252 [2024-07-24 19:06:58.662656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.252 qpair failed and we were unable to recover it.
00:24:21.252 [2024-07-24 19:06:58.662783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.252 [2024-07-24 19:06:58.662808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.252 qpair failed and we were unable to recover it.
00:24:21.252 [2024-07-24 19:06:58.662950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.252 [2024-07-24 19:06:58.662975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.252 qpair failed and we were unable to recover it.
00:24:21.252 [2024-07-24 19:06:58.663125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.252 [2024-07-24 19:06:58.663164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.252 qpair failed and we were unable to recover it.
00:24:21.252 [2024-07-24 19:06:58.663326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.252 [2024-07-24 19:06:58.663354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.252 qpair failed and we were unable to recover it.
00:24:21.252 [2024-07-24 19:06:58.663506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.252 [2024-07-24 19:06:58.663533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.252 qpair failed and we were unable to recover it.
00:24:21.252 [2024-07-24 19:06:58.663694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.252 [2024-07-24 19:06:58.663720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.252 qpair failed and we were unable to recover it.
00:24:21.252 [2024-07-24 19:06:58.663846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.252 [2024-07-24 19:06:58.663871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.252 qpair failed and we were unable to recover it.
00:24:21.252 [2024-07-24 19:06:58.664031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.252 [2024-07-24 19:06:58.664058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.252 qpair failed and we were unable to recover it.
00:24:21.252 [2024-07-24 19:06:58.664217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.252 [2024-07-24 19:06:58.664242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.252 qpair failed and we were unable to recover it.
00:24:21.252 [2024-07-24 19:06:58.664363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.252 [2024-07-24 19:06:58.664388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.252 qpair failed and we were unable to recover it.
00:24:21.252 [2024-07-24 19:06:58.664543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.252 [2024-07-24 19:06:58.664568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.252 qpair failed and we were unable to recover it.
00:24:21.252 [2024-07-24 19:06:58.664691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.252 [2024-07-24 19:06:58.664715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.252 qpair failed and we were unable to recover it.
00:24:21.252 [2024-07-24 19:06:58.664841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.252 [2024-07-24 19:06:58.664866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.252 qpair failed and we were unable to recover it.
00:24:21.252 [2024-07-24 19:06:58.664986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.252 [2024-07-24 19:06:58.665011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.252 qpair failed and we were unable to recover it.
00:24:21.252 [2024-07-24 19:06:58.665185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.252 [2024-07-24 19:06:58.665211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.252 qpair failed and we were unable to recover it.
00:24:21.252 [2024-07-24 19:06:58.665339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.252 [2024-07-24 19:06:58.665368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.252 qpair failed and we were unable to recover it.
00:24:21.253 [2024-07-24 19:06:58.665524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.253 [2024-07-24 19:06:58.665548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.253 qpair failed and we were unable to recover it.
00:24:21.253 [2024-07-24 19:06:58.665695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.253 [2024-07-24 19:06:58.665720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.253 qpair failed and we were unable to recover it.
00:24:21.253 [2024-07-24 19:06:58.665841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.253 [2024-07-24 19:06:58.665868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.253 qpair failed and we were unable to recover it.
00:24:21.253 [2024-07-24 19:06:58.666019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.253 [2024-07-24 19:06:58.666044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.253 qpair failed and we were unable to recover it.
00:24:21.253 [2024-07-24 19:06:58.666195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.253 [2024-07-24 19:06:58.666221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.253 qpair failed and we were unable to recover it.
00:24:21.253 [2024-07-24 19:06:58.666376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.253 [2024-07-24 19:06:58.666401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.253 qpair failed and we were unable to recover it.
00:24:21.253 [2024-07-24 19:06:58.666525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.253 [2024-07-24 19:06:58.666550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.253 qpair failed and we were unable to recover it.
00:24:21.253 [2024-07-24 19:06:58.666706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.253 [2024-07-24 19:06:58.666732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.253 qpair failed and we were unable to recover it.
00:24:21.253 [2024-07-24 19:06:58.666860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.253 [2024-07-24 19:06:58.666885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.253 qpair failed and we were unable to recover it.
00:24:21.253 [2024-07-24 19:06:58.667013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.253 [2024-07-24 19:06:58.667038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.253 qpair failed and we were unable to recover it.
00:24:21.253 [2024-07-24 19:06:58.667218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.253 [2024-07-24 19:06:58.667244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.253 qpair failed and we were unable to recover it.
00:24:21.253 [2024-07-24 19:06:58.667366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.253 [2024-07-24 19:06:58.667391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.253 qpair failed and we were unable to recover it.
00:24:21.253 [2024-07-24 19:06:58.667515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.253 [2024-07-24 19:06:58.667540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.253 qpair failed and we were unable to recover it.
00:24:21.253 [2024-07-24 19:06:58.667725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.253 [2024-07-24 19:06:58.667750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.253 qpair failed and we were unable to recover it.
00:24:21.253 [2024-07-24 19:06:58.667904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.253 [2024-07-24 19:06:58.667928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.253 qpair failed and we were unable to recover it.
00:24:21.253 [2024-07-24 19:06:58.668111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.253 [2024-07-24 19:06:58.668136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.253 qpair failed and we were unable to recover it.
00:24:21.253 [2024-07-24 19:06:58.668266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.253 [2024-07-24 19:06:58.668291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.253 qpair failed and we were unable to recover it.
00:24:21.253 [2024-07-24 19:06:58.668446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.253 [2024-07-24 19:06:58.668471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.253 qpair failed and we were unable to recover it.
00:24:21.253 [2024-07-24 19:06:58.668619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.253 [2024-07-24 19:06:58.668644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.253 qpair failed and we were unable to recover it.
00:24:21.253 [2024-07-24 19:06:58.668775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.253 [2024-07-24 19:06:58.668800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.253 qpair failed and we were unable to recover it.
00:24:21.253 [2024-07-24 19:06:58.668955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.253 [2024-07-24 19:06:58.668980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.253 qpair failed and we were unable to recover it.
00:24:21.253 [2024-07-24 19:06:58.669136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.253 [2024-07-24 19:06:58.669162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.253 qpair failed and we were unable to recover it.
00:24:21.253 [2024-07-24 19:06:58.669287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.253 [2024-07-24 19:06:58.669312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.253 qpair failed and we were unable to recover it.
00:24:21.253 [2024-07-24 19:06:58.669464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.253 [2024-07-24 19:06:58.669490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.253 qpair failed and we were unable to recover it.
00:24:21.253 [2024-07-24 19:06:58.669621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.253 [2024-07-24 19:06:58.669646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.253 qpair failed and we were unable to recover it.
00:24:21.253 [2024-07-24 19:06:58.669800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.253 [2024-07-24 19:06:58.669825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.253 qpair failed and we were unable to recover it.
00:24:21.253 [2024-07-24 19:06:58.669964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.253 [2024-07-24 19:06:58.669989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.253 qpair failed and we were unable to recover it.
00:24:21.253 [2024-07-24 19:06:58.670146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.253 [2024-07-24 19:06:58.670171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.253 qpair failed and we were unable to recover it.
00:24:21.253 [2024-07-24 19:06:58.670323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.253 [2024-07-24 19:06:58.670348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.253 qpair failed and we were unable to recover it.
00:24:21.253 [2024-07-24 19:06:58.670518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.253 [2024-07-24 19:06:58.670543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.253 qpair failed and we were unable to recover it.
00:24:21.253 [2024-07-24 19:06:58.670697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.253 [2024-07-24 19:06:58.670722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.253 qpair failed and we were unable to recover it.
00:24:21.253 [2024-07-24 19:06:58.670851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.254 [2024-07-24 19:06:58.670876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.254 qpair failed and we were unable to recover it.
00:24:21.254 [2024-07-24 19:06:58.671031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.254 [2024-07-24 19:06:58.671056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.254 qpair failed and we were unable to recover it.
00:24:21.254 [2024-07-24 19:06:58.671186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.254 [2024-07-24 19:06:58.671213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.254 qpair failed and we were unable to recover it.
00:24:21.254 [2024-07-24 19:06:58.671363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.254 [2024-07-24 19:06:58.671388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.254 qpair failed and we were unable to recover it.
00:24:21.254 [2024-07-24 19:06:58.671538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.254 [2024-07-24 19:06:58.671563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.254 qpair failed and we were unable to recover it.
00:24:21.254 [2024-07-24 19:06:58.671718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.254 [2024-07-24 19:06:58.671743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.254 qpair failed and we were unable to recover it.
00:24:21.254 [2024-07-24 19:06:58.671893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.254 [2024-07-24 19:06:58.671918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.254 qpair failed and we were unable to recover it.
00:24:21.254 [2024-07-24 19:06:58.672076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.254 [2024-07-24 19:06:58.672112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.254 qpair failed and we were unable to recover it.
00:24:21.254 [2024-07-24 19:06:58.672238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.254 [2024-07-24 19:06:58.672263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.254 qpair failed and we were unable to recover it.
00:24:21.254 [2024-07-24 19:06:58.672399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.254 [2024-07-24 19:06:58.672438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.254 qpair failed and we were unable to recover it.
00:24:21.254 [2024-07-24 19:06:58.672600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.254 [2024-07-24 19:06:58.672628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.254 qpair failed and we were unable to recover it.
00:24:21.254 [2024-07-24 19:06:58.672783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.254 [2024-07-24 19:06:58.672808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.254 qpair failed and we were unable to recover it.
00:24:21.254 [2024-07-24 19:06:58.672963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.254 [2024-07-24 19:06:58.672987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.254 qpair failed and we were unable to recover it.
00:24:21.254 [2024-07-24 19:06:58.673119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.254 [2024-07-24 19:06:58.673147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.254 qpair failed and we were unable to recover it.
00:24:21.254 [2024-07-24 19:06:58.673323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.254 [2024-07-24 19:06:58.673348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.254 qpair failed and we were unable to recover it.
00:24:21.254 [2024-07-24 19:06:58.673510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.254 [2024-07-24 19:06:58.673537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.254 qpair failed and we were unable to recover it.
00:24:21.254 [2024-07-24 19:06:58.673706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.254 [2024-07-24 19:06:58.673731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.254 qpair failed and we were unable to recover it.
00:24:21.254 [2024-07-24 19:06:58.673881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.254 [2024-07-24 19:06:58.673906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.254 qpair failed and we were unable to recover it.
00:24:21.254 [2024-07-24 19:06:58.674058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.254 [2024-07-24 19:06:58.674083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.254 qpair failed and we were unable to recover it.
00:24:21.254 [2024-07-24 19:06:58.674222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.254 [2024-07-24 19:06:58.674246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.254 qpair failed and we were unable to recover it.
00:24:21.254 [2024-07-24 19:06:58.674371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.254 [2024-07-24 19:06:58.674395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.254 qpair failed and we were unable to recover it.
00:24:21.254 [2024-07-24 19:06:58.674535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.254 [2024-07-24 19:06:58.674560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.254 qpair failed and we were unable to recover it.
00:24:21.254 [2024-07-24 19:06:58.674686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.254 [2024-07-24 19:06:58.674711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.254 qpair failed and we were unable to recover it.
00:24:21.254 [2024-07-24 19:06:58.674832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.254 [2024-07-24 19:06:58.674857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.254 qpair failed and we were unable to recover it.
00:24:21.254 [2024-07-24 19:06:58.675010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.254 [2024-07-24 19:06:58.675034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.254 qpair failed and we were unable to recover it.
00:24:21.254 [2024-07-24 19:06:58.675162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.254 [2024-07-24 19:06:58.675188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.254 qpair failed and we were unable to recover it.
00:24:21.254 [2024-07-24 19:06:58.675357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.254 [2024-07-24 19:06:58.675396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.254 qpair failed and we were unable to recover it.
00:24:21.254 [2024-07-24 19:06:58.675563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.254 [2024-07-24 19:06:58.675590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.254 qpair failed and we were unable to recover it.
00:24:21.254 [2024-07-24 19:06:58.675721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.254 [2024-07-24 19:06:58.675749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.254 qpair failed and we were unable to recover it.
00:24:21.254 [2024-07-24 19:06:58.675928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.254 [2024-07-24 19:06:58.675954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.254 qpair failed and we were unable to recover it.
00:24:21.254 [2024-07-24 19:06:58.676115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.254 [2024-07-24 19:06:58.676141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.254 qpair failed and we were unable to recover it.
00:24:21.254 [2024-07-24 19:06:58.676284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.676310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.255 qpair failed and we were unable to recover it.
00:24:21.255 [2024-07-24 19:06:58.676467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.676493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.255 qpair failed and we were unable to recover it.
00:24:21.255 [2024-07-24 19:06:58.676649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.676675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.255 qpair failed and we were unable to recover it.
00:24:21.255 [2024-07-24 19:06:58.676854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.676879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.255 qpair failed and we were unable to recover it.
00:24:21.255 [2024-07-24 19:06:58.676998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.677025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.255 qpair failed and we were unable to recover it.
00:24:21.255 [2024-07-24 19:06:58.677188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.677227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.255 qpair failed and we were unable to recover it.
00:24:21.255 [2024-07-24 19:06:58.677388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.677415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.255 qpair failed and we were unable to recover it.
00:24:21.255 [2024-07-24 19:06:58.677542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.677583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.255 qpair failed and we were unable to recover it.
00:24:21.255 [2024-07-24 19:06:58.677743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.677770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.255 qpair failed and we were unable to recover it.
00:24:21.255 [2024-07-24 19:06:58.677898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.677923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.255 qpair failed and we were unable to recover it.
00:24:21.255 [2024-07-24 19:06:58.678098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.678130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.255 qpair failed and we were unable to recover it.
00:24:21.255 [2024-07-24 19:06:58.678261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.678287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.255 qpair failed and we were unable to recover it.
00:24:21.255 [2024-07-24 19:06:58.678447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.678471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.255 qpair failed and we were unable to recover it.
00:24:21.255 [2024-07-24 19:06:58.678621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.678646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.255 qpair failed and we were unable to recover it.
00:24:21.255 [2024-07-24 19:06:58.678828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.678854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.255 qpair failed and we were unable to recover it.
00:24:21.255 [2024-07-24 19:06:58.679027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.679052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.255 qpair failed and we were unable to recover it.
00:24:21.255 [2024-07-24 19:06:58.679185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.679211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.255 qpair failed and we were unable to recover it.
00:24:21.255 [2024-07-24 19:06:58.679364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.679389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.255 qpair failed and we were unable to recover it.
00:24:21.255 [2024-07-24 19:06:58.679542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.679573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.255 qpair failed and we were unable to recover it.
00:24:21.255 [2024-07-24 19:06:58.679725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.679750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.255 qpair failed and we were unable to recover it.
00:24:21.255 [2024-07-24 19:06:58.679883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.679908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.255 qpair failed and we were unable to recover it.
00:24:21.255 [2024-07-24 19:06:58.680082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.680112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.255 qpair failed and we were unable to recover it.
00:24:21.255 [2024-07-24 19:06:58.680244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.680270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.255 qpair failed and we were unable to recover it.
00:24:21.255 [2024-07-24 19:06:58.680393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.680417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.255 qpair failed and we were unable to recover it.
00:24:21.255 [2024-07-24 19:06:58.680566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.680590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.255 qpair failed and we were unable to recover it.
00:24:21.255 [2024-07-24 19:06:58.680713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.680739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.255 qpair failed and we were unable to recover it.
00:24:21.255 [2024-07-24 19:06:58.680914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.680940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.255 qpair failed and we were unable to recover it.
00:24:21.255 [2024-07-24 19:06:58.681095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.681125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.255 qpair failed and we were unable to recover it.
00:24:21.255 [2024-07-24 19:06:58.681277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.681303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.255 qpair failed and we were unable to recover it.
00:24:21.255 [2024-07-24 19:06:58.681455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.681480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.255 qpair failed and we were unable to recover it.
00:24:21.255 [2024-07-24 19:06:58.681602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.681628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.255 qpair failed and we were unable to recover it.
00:24:21.255 [2024-07-24 19:06:58.681750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.681775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.255 qpair failed and we were unable to recover it.
00:24:21.255 [2024-07-24 19:06:58.682015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.682040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.255 qpair failed and we were unable to recover it.
00:24:21.255 [2024-07-24 19:06:58.682168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.682193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.255 qpair failed and we were unable to recover it.
00:24:21.255 [2024-07-24 19:06:58.682321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.255 [2024-07-24 19:06:58.682347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.256 qpair failed and we were unable to recover it.
00:24:21.256 [2024-07-24 19:06:58.682520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.256 [2024-07-24 19:06:58.682545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.256 qpair failed and we were unable to recover it.
00:24:21.256 [2024-07-24 19:06:58.682679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.256 [2024-07-24 19:06:58.682703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.256 qpair failed and we were unable to recover it. 00:24:21.256 [2024-07-24 19:06:58.682852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.256 [2024-07-24 19:06:58.682878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.256 qpair failed and we were unable to recover it. 00:24:21.256 [2024-07-24 19:06:58.683026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.256 [2024-07-24 19:06:58.683052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.256 qpair failed and we were unable to recover it. 00:24:21.256 [2024-07-24 19:06:58.683207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.256 [2024-07-24 19:06:58.683234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.256 qpair failed and we were unable to recover it. 00:24:21.256 [2024-07-24 19:06:58.683386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.256 [2024-07-24 19:06:58.683410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.256 qpair failed and we were unable to recover it. 00:24:21.256 [2024-07-24 19:06:58.683542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.256 [2024-07-24 19:06:58.683567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.256 qpair failed and we were unable to recover it. 00:24:21.256 [2024-07-24 19:06:58.683691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.256 [2024-07-24 19:06:58.683717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.256 qpair failed and we were unable to recover it. 00:24:21.256 [2024-07-24 19:06:58.683873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.256 [2024-07-24 19:06:58.683898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.256 qpair failed and we were unable to recover it. 00:24:21.256 [2024-07-24 19:06:58.684087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.256 [2024-07-24 19:06:58.684134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.256 qpair failed and we were unable to recover it. 00:24:21.256 [2024-07-24 19:06:58.684297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.256 [2024-07-24 19:06:58.684338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.256 qpair failed and we were unable to recover it. 
00:24:21.256 [2024-07-24 19:06:58.684499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.256 [2024-07-24 19:06:58.684526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.256 qpair failed and we were unable to recover it. 00:24:21.256 [2024-07-24 19:06:58.684684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.256 [2024-07-24 19:06:58.684709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.256 qpair failed and we were unable to recover it. 00:24:21.256 [2024-07-24 19:06:58.684863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.256 [2024-07-24 19:06:58.684890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.256 qpair failed and we were unable to recover it. 00:24:21.256 [2024-07-24 19:06:58.685017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.256 [2024-07-24 19:06:58.685043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.256 qpair failed and we were unable to recover it. 00:24:21.256 [2024-07-24 19:06:58.685178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.256 [2024-07-24 19:06:58.685205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.256 qpair failed and we were unable to recover it. 00:24:21.256 [2024-07-24 19:06:58.685364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.256 [2024-07-24 19:06:58.685390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.256 qpair failed and we were unable to recover it. 00:24:21.256 [2024-07-24 19:06:58.685570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.256 [2024-07-24 19:06:58.685596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.256 qpair failed and we were unable to recover it. 00:24:21.256 [2024-07-24 19:06:58.685722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.256 [2024-07-24 19:06:58.685746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.256 qpair failed and we were unable to recover it. 00:24:21.256 [2024-07-24 19:06:58.685900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.256 [2024-07-24 19:06:58.685925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.256 qpair failed and we were unable to recover it. 00:24:21.256 [2024-07-24 19:06:58.686066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.256 [2024-07-24 19:06:58.686112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.256 qpair failed and we were unable to recover it. 
00:24:21.256 [2024-07-24 19:06:58.686281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.256 [2024-07-24 19:06:58.686309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.256 qpair failed and we were unable to recover it. 00:24:21.256 [2024-07-24 19:06:58.686436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.256 [2024-07-24 19:06:58.686462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.256 qpair failed and we were unable to recover it. 00:24:21.256 [2024-07-24 19:06:58.686639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.256 [2024-07-24 19:06:58.686670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.256 qpair failed and we were unable to recover it. 00:24:21.256 [2024-07-24 19:06:58.686803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.256 [2024-07-24 19:06:58.686828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.256 qpair failed and we were unable to recover it. 00:24:21.257 [2024-07-24 19:06:58.686979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.687005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 00:24:21.257 [2024-07-24 19:06:58.687159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.687186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 00:24:21.257 [2024-07-24 19:06:58.687317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.687343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 00:24:21.257 [2024-07-24 19:06:58.687465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.687491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 00:24:21.257 [2024-07-24 19:06:58.687640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.687666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 00:24:21.257 [2024-07-24 19:06:58.687839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.687865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 
00:24:21.257 [2024-07-24 19:06:58.687999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.688026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 00:24:21.257 [2024-07-24 19:06:58.688205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.688234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 00:24:21.257 [2024-07-24 19:06:58.688370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.688409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 00:24:21.257 [2024-07-24 19:06:58.688564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.688591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 00:24:21.257 [2024-07-24 19:06:58.688747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.688772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 00:24:21.257 [2024-07-24 19:06:58.688924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.688949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 00:24:21.257 [2024-07-24 19:06:58.689099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.689129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 00:24:21.257 [2024-07-24 19:06:58.689292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.689317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 00:24:21.257 [2024-07-24 19:06:58.689492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.689517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 00:24:21.257 [2024-07-24 19:06:58.689650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.689676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 
00:24:21.257 [2024-07-24 19:06:58.689800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.689825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 00:24:21.257 [2024-07-24 19:06:58.689948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.689973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 00:24:21.257 [2024-07-24 19:06:58.690118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.690144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 00:24:21.257 [2024-07-24 19:06:58.690296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.690321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 00:24:21.257 [2024-07-24 19:06:58.690478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.690503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 00:24:21.257 [2024-07-24 19:06:58.690652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.690677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 00:24:21.257 [2024-07-24 19:06:58.690849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.690873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 00:24:21.257 [2024-07-24 19:06:58.691028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.691056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 00:24:21.257 [2024-07-24 19:06:58.691208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.691234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 00:24:21.257 [2024-07-24 19:06:58.691365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.691404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 
00:24:21.257 [2024-07-24 19:06:58.691540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.691566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 00:24:21.257 [2024-07-24 19:06:58.691717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.691743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 00:24:21.257 [2024-07-24 19:06:58.691872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.691898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 00:24:21.257 [2024-07-24 19:06:58.692076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.692110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 00:24:21.257 [2024-07-24 19:06:58.692264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.692289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 00:24:21.257 [2024-07-24 19:06:58.692411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.692436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 00:24:21.257 [2024-07-24 19:06:58.692568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.692595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 00:24:21.257 [2024-07-24 19:06:58.692713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.692738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 00:24:21.257 [2024-07-24 19:06:58.692888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.692913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 00:24:21.257 [2024-07-24 19:06:58.693032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.257 [2024-07-24 19:06:58.693059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.257 qpair failed and we were unable to recover it. 
00:24:21.257 [2024-07-24 19:06:58.693217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.693243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 00:24:21.258 [2024-07-24 19:06:58.693398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.693426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 00:24:21.258 [2024-07-24 19:06:58.693556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.693588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 00:24:21.258 [2024-07-24 19:06:58.693715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.693741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 00:24:21.258 [2024-07-24 19:06:58.693859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.693886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 00:24:21.258 [2024-07-24 19:06:58.694038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.694063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 00:24:21.258 [2024-07-24 19:06:58.694203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.694228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 00:24:21.258 [2024-07-24 19:06:58.694372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.694398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 00:24:21.258 [2024-07-24 19:06:58.694522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.694547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 00:24:21.258 [2024-07-24 19:06:58.694674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.694700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 
00:24:21.258 [2024-07-24 19:06:58.694857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.694884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 00:24:21.258 [2024-07-24 19:06:58.695035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.695061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 00:24:21.258 [2024-07-24 19:06:58.695192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.695219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 00:24:21.258 [2024-07-24 19:06:58.695371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.695396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 00:24:21.258 [2024-07-24 19:06:58.695548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.695574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 00:24:21.258 [2024-07-24 19:06:58.695698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.695724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 00:24:21.258 [2024-07-24 19:06:58.695883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.695907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 00:24:21.258 [2024-07-24 19:06:58.696027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.696051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 00:24:21.258 [2024-07-24 19:06:58.696207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.696233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 00:24:21.258 [2024-07-24 19:06:58.696354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.696379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 
00:24:21.258 [2024-07-24 19:06:58.696536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.696562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 00:24:21.258 [2024-07-24 19:06:58.696688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.696714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 00:24:21.258 [2024-07-24 19:06:58.696854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.696881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 00:24:21.258 [2024-07-24 19:06:58.697030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.697056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 00:24:21.258 [2024-07-24 19:06:58.697212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.697238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 00:24:21.258 [2024-07-24 19:06:58.697394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.697419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 00:24:21.258 [2024-07-24 19:06:58.697596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.697621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 00:24:21.258 [2024-07-24 19:06:58.697748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.697773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 00:24:21.258 [2024-07-24 19:06:58.697927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.697951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 00:24:21.258 [2024-07-24 19:06:58.698091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.698137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 
00:24:21.258 [2024-07-24 19:06:58.698271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.698300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 00:24:21.258 [2024-07-24 19:06:58.698485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.698511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 00:24:21.258 [2024-07-24 19:06:58.698666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.698693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 00:24:21.258 [2024-07-24 19:06:58.698875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.698901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 00:24:21.258 [2024-07-24 19:06:58.699023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.699048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 00:24:21.258 [2024-07-24 19:06:58.699205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.699232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 00:24:21.258 [2024-07-24 19:06:58.699404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.258 [2024-07-24 19:06:58.699430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.258 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.699558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.699583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.699736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.699762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.699937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.699976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 
00:24:21.259 [2024-07-24 19:06:58.700126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.700165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.700325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.700363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.700550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.700582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.700739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.700765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.700927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.700953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.701133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.701160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.701286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.701311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.701467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.701491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.701648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.701673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.701795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.701820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 
00:24:21.259 [2024-07-24 19:06:58.701983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.702007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.702168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.702194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.702346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.702371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.702521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.702546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.702704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.702730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.702856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.702881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.703026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.703051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.703211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.703238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.703357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.703382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.703542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.703566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 
00:24:21.259 [2024-07-24 19:06:58.703716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.703741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.703912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.703937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.704087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.704129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.704279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.704305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.704435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.704460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.704593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.704619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.704778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.704805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.704963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.704989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.705170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.705197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.705382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.705426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 
00:24:21.259 [2024-07-24 19:06:58.705588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.705616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.705777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.705803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.705981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.706007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.706136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.706162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.706320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.259 [2024-07-24 19:06:58.706347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.259 qpair failed and we were unable to recover it. 00:24:21.259 [2024-07-24 19:06:58.706476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.260 [2024-07-24 19:06:58.706502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.260 qpair failed and we were unable to recover it. 00:24:21.260 [2024-07-24 19:06:58.706625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.260 [2024-07-24 19:06:58.706651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.260 qpair failed and we were unable to recover it. 00:24:21.260 [2024-07-24 19:06:58.706781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.260 [2024-07-24 19:06:58.706809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.260 qpair failed and we were unable to recover it. 00:24:21.260 [2024-07-24 19:06:58.706964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.260 [2024-07-24 19:06:58.706991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.260 qpair failed and we were unable to recover it. 00:24:21.260 [2024-07-24 19:06:58.707140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.260 [2024-07-24 19:06:58.707178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.260 qpair failed and we were unable to recover it. 
00:24:21.260 [2024-07-24 19:06:58.707338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.260 [2024-07-24 19:06:58.707365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.260 qpair failed and we were unable to recover it. 00:24:21.260 [2024-07-24 19:06:58.707524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.260 [2024-07-24 19:06:58.707549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.260 qpair failed and we were unable to recover it. 00:24:21.260 [2024-07-24 19:06:58.707675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.260 [2024-07-24 19:06:58.707700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.260 qpair failed and we were unable to recover it. 00:24:21.260 [2024-07-24 19:06:58.707857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.260 [2024-07-24 19:06:58.707882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.260 qpair failed and we were unable to recover it. 00:24:21.260 [2024-07-24 19:06:58.708035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.260 [2024-07-24 19:06:58.708060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.260 qpair failed and we were unable to recover it. 00:24:21.260 [2024-07-24 19:06:58.708258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.260 [2024-07-24 19:06:58.708298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.260 qpair failed and we were unable to recover it. 00:24:21.260 [2024-07-24 19:06:58.708441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.260 [2024-07-24 19:06:58.708480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.260 qpair failed and we were unable to recover it. 00:24:21.260 [2024-07-24 19:06:58.708640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.260 [2024-07-24 19:06:58.708667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.260 qpair failed and we were unable to recover it. 00:24:21.260 [2024-07-24 19:06:58.708799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.260 [2024-07-24 19:06:58.708825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.260 qpair failed and we were unable to recover it. 00:24:21.260 [2024-07-24 19:06:58.708952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.260 [2024-07-24 19:06:58.708977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.260 qpair failed and we were unable to recover it. 
00:24:21.260 [2024-07-24 19:06:58.709164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.260 [2024-07-24 19:06:58.709190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.260 qpair failed and we were unable to recover it.
00:24:21.260 [2024-07-24 19:06:58.709340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.260 [2024-07-24 19:06:58.709365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.260 qpair failed and we were unable to recover it.
00:24:21.260 [... the same three-line connect()/qpair failure repeats continuously through 19:06:58.745, cycling among tqpair=0x7f0728000b90, 0x7f0718000b90, 0x7f0720000b90, and 0x17c6250, always with addr=10.0.0.2, port=4420 ...]
00:24:21.265 [2024-07-24 19:06:58.745615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.265 [2024-07-24 19:06:58.745640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.265 qpair failed and we were unable to recover it.
00:24:21.265 [2024-07-24 19:06:58.745815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.265 [2024-07-24 19:06:58.745840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.265 qpair failed and we were unable to recover it. 00:24:21.265 [2024-07-24 19:06:58.745957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.265 [2024-07-24 19:06:58.745982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.265 qpair failed and we were unable to recover it. 00:24:21.265 [2024-07-24 19:06:58.746117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.265 [2024-07-24 19:06:58.746143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.265 qpair failed and we were unable to recover it. 00:24:21.265 [2024-07-24 19:06:58.746271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.265 [2024-07-24 19:06:58.746296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.265 qpair failed and we were unable to recover it. 00:24:21.265 [2024-07-24 19:06:58.746424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.265 [2024-07-24 19:06:58.746448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.265 qpair failed and we were unable to recover it. 00:24:21.265 [2024-07-24 19:06:58.746574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.265 [2024-07-24 19:06:58.746599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.265 qpair failed and we were unable to recover it. 00:24:21.265 [2024-07-24 19:06:58.746750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.265 [2024-07-24 19:06:58.746775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.265 qpair failed and we were unable to recover it. 00:24:21.265 [2024-07-24 19:06:58.746908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.265 [2024-07-24 19:06:58.746937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.265 qpair failed and we were unable to recover it. 00:24:21.265 [2024-07-24 19:06:58.747088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.265 [2024-07-24 19:06:58.747118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.265 qpair failed and we were unable to recover it. 00:24:21.265 [2024-07-24 19:06:58.747251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.265 [2024-07-24 19:06:58.747277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.265 qpair failed and we were unable to recover it. 
00:24:21.265 [2024-07-24 19:06:58.747404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.265 [2024-07-24 19:06:58.747431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.265 qpair failed and we were unable to recover it. 00:24:21.265 [2024-07-24 19:06:58.747566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.265 [2024-07-24 19:06:58.747593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.265 qpair failed and we were unable to recover it. 00:24:21.265 [2024-07-24 19:06:58.747719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.265 [2024-07-24 19:06:58.747745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.265 qpair failed and we were unable to recover it. 00:24:21.265 [2024-07-24 19:06:58.747872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.265 [2024-07-24 19:06:58.747899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.265 qpair failed and we were unable to recover it. 00:24:21.265 [2024-07-24 19:06:58.748079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.265 [2024-07-24 19:06:58.748110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.265 qpair failed and we were unable to recover it. 00:24:21.265 [2024-07-24 19:06:58.748263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.265 [2024-07-24 19:06:58.748289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.265 qpair failed and we were unable to recover it. 00:24:21.265 [2024-07-24 19:06:58.748444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.265 [2024-07-24 19:06:58.748469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.265 qpair failed and we were unable to recover it. 00:24:21.265 [2024-07-24 19:06:58.748601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.265 [2024-07-24 19:06:58.748627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.265 qpair failed and we were unable to recover it. 00:24:21.265 [2024-07-24 19:06:58.748803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.265 [2024-07-24 19:06:58.748830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.265 qpair failed and we were unable to recover it. 00:24:21.265 [2024-07-24 19:06:58.748958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.265 [2024-07-24 19:06:58.748985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.265 qpair failed and we were unable to recover it. 
00:24:21.265 [2024-07-24 19:06:58.749125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.265 [2024-07-24 19:06:58.749150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.265 qpair failed and we were unable to recover it. 00:24:21.265 [2024-07-24 19:06:58.749299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.265 [2024-07-24 19:06:58.749324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.265 qpair failed and we were unable to recover it. 00:24:21.265 [2024-07-24 19:06:58.749476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.265 [2024-07-24 19:06:58.749501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.265 qpair failed and we were unable to recover it. 00:24:21.265 [2024-07-24 19:06:58.749649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.265 [2024-07-24 19:06:58.749674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.265 qpair failed and we were unable to recover it. 00:24:21.265 [2024-07-24 19:06:58.749825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.265 [2024-07-24 19:06:58.749854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.265 qpair failed and we were unable to recover it. 00:24:21.265 [2024-07-24 19:06:58.750002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.265 [2024-07-24 19:06:58.750029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.265 qpair failed and we were unable to recover it. 00:24:21.265 [2024-07-24 19:06:58.750154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.265 [2024-07-24 19:06:58.750182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.265 qpair failed and we were unable to recover it. 00:24:21.265 [2024-07-24 19:06:58.750335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.750361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 00:24:21.266 [2024-07-24 19:06:58.750515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.750541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 00:24:21.266 [2024-07-24 19:06:58.750677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.750704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 
00:24:21.266 [2024-07-24 19:06:58.750863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.750891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 00:24:21.266 [2024-07-24 19:06:58.751013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.751040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 00:24:21.266 [2024-07-24 19:06:58.751222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.751249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 00:24:21.266 [2024-07-24 19:06:58.751376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.751400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 00:24:21.266 [2024-07-24 19:06:58.751574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.751599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 00:24:21.266 [2024-07-24 19:06:58.751746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.751771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 00:24:21.266 [2024-07-24 19:06:58.751903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.751929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 00:24:21.266 [2024-07-24 19:06:58.752061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.752088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 00:24:21.266 [2024-07-24 19:06:58.752252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.752279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 00:24:21.266 [2024-07-24 19:06:58.752454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.752480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 
00:24:21.266 [2024-07-24 19:06:58.752629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.752655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 00:24:21.266 [2024-07-24 19:06:58.752807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.752834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 00:24:21.266 [2024-07-24 19:06:58.753012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.753037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 00:24:21.266 [2024-07-24 19:06:58.753167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.753194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 00:24:21.266 [2024-07-24 19:06:58.753348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.753373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 00:24:21.266 [2024-07-24 19:06:58.753504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.753530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 00:24:21.266 [2024-07-24 19:06:58.753652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.753677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 00:24:21.266 [2024-07-24 19:06:58.753855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.753880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 00:24:21.266 [2024-07-24 19:06:58.754025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.754050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 00:24:21.266 [2024-07-24 19:06:58.754212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.754238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 
00:24:21.266 [2024-07-24 19:06:58.754393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.754419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 00:24:21.266 [2024-07-24 19:06:58.754552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.754584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 00:24:21.266 [2024-07-24 19:06:58.754735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.754760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 00:24:21.266 [2024-07-24 19:06:58.754936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.754961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 00:24:21.266 [2024-07-24 19:06:58.755084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.755114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 00:24:21.266 [2024-07-24 19:06:58.755271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.755295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 00:24:21.266 [2024-07-24 19:06:58.755448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.755473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 00:24:21.266 [2024-07-24 19:06:58.755588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.755613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 00:24:21.266 [2024-07-24 19:06:58.755778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.755802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 00:24:21.266 [2024-07-24 19:06:58.755926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.755951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 
00:24:21.266 [2024-07-24 19:06:58.756081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.756111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 00:24:21.266 [2024-07-24 19:06:58.756264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.756290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 00:24:21.266 [2024-07-24 19:06:58.756412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.756437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 00:24:21.266 [2024-07-24 19:06:58.756575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.266 [2024-07-24 19:06:58.756599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.266 qpair failed and we were unable to recover it. 00:24:21.267 [2024-07-24 19:06:58.756743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.756768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 00:24:21.267 [2024-07-24 19:06:58.756925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.756950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 00:24:21.267 [2024-07-24 19:06:58.757066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.757091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 00:24:21.267 [2024-07-24 19:06:58.757233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.757258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 00:24:21.267 [2024-07-24 19:06:58.757410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.757435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 00:24:21.267 [2024-07-24 19:06:58.757552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.757576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 
00:24:21.267 [2024-07-24 19:06:58.757716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.757741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 00:24:21.267 [2024-07-24 19:06:58.757919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.757943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 00:24:21.267 [2024-07-24 19:06:58.758066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.758091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 00:24:21.267 [2024-07-24 19:06:58.758268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.758293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 00:24:21.267 [2024-07-24 19:06:58.758449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.758474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 00:24:21.267 [2024-07-24 19:06:58.758614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.758638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 00:24:21.267 [2024-07-24 19:06:58.758791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.758816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 00:24:21.267 [2024-07-24 19:06:58.758960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.758984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 00:24:21.267 [2024-07-24 19:06:58.759141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.759171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 00:24:21.267 [2024-07-24 19:06:58.759322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.759347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 
00:24:21.267 [2024-07-24 19:06:58.759499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.759524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 00:24:21.267 [2024-07-24 19:06:58.759647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.759672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 00:24:21.267 [2024-07-24 19:06:58.759800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.759828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 00:24:21.267 [2024-07-24 19:06:58.760016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.760041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 00:24:21.267 [2024-07-24 19:06:58.760164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.760190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 00:24:21.267 [2024-07-24 19:06:58.760343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.760367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 00:24:21.267 [2024-07-24 19:06:58.760494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.760519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 00:24:21.267 [2024-07-24 19:06:58.760670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.760695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 00:24:21.267 [2024-07-24 19:06:58.760867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.760907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 00:24:21.267 [2024-07-24 19:06:58.761069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.761097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 
00:24:21.267 [2024-07-24 19:06:58.761255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.761281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 00:24:21.267 [2024-07-24 19:06:58.761462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.761489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 00:24:21.267 [2024-07-24 19:06:58.761637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.761663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 00:24:21.267 [2024-07-24 19:06:58.761791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.761817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 00:24:21.267 [2024-07-24 19:06:58.761970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.761996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 00:24:21.267 [2024-07-24 19:06:58.762131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.762156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 00:24:21.267 [2024-07-24 19:06:58.762309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.762334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 00:24:21.267 [2024-07-24 19:06:58.762485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.762511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 00:24:21.267 [2024-07-24 19:06:58.762667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.762692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 00:24:21.267 [2024-07-24 19:06:58.762838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.762863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 
00:24:21.267 [2024-07-24 19:06:58.762990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.763017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 00:24:21.267 [2024-07-24 19:06:58.763165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.267 [2024-07-24 19:06:58.763191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.267 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.763371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.763396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.763572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.763598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.763727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.763753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.763932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.763963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.764122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.764149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.764299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.764324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.764476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.764500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.764633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.764658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 
00:24:21.268 [2024-07-24 19:06:58.764778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.764802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.764928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.764953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.765117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.765143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.765288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.765313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.765440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.765465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.765590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.765614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.765766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.765791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.765948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.765972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.766100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.766130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.766282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.766307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 
00:24:21.268 [2024-07-24 19:06:58.766462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.766487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.766663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.766688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.766836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.766861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.767009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.767033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.767174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.767200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.767331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.767356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.767503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.767528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.767684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.767709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.767828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.767852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.767978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.768002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 
00:24:21.268 [2024-07-24 19:06:58.768189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.768238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.768381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.768411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.768565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.768596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.768727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.768753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.768880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.768908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.769057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.769083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.769268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.769294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.769468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.769493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.769640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.769665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 00:24:21.268 [2024-07-24 19:06:58.769842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.268 [2024-07-24 19:06:58.769866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.268 qpair failed and we were unable to recover it. 
00:24:21.268 [2024-07-24 19:06:58.770014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.269 [2024-07-24 19:06:58.770038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.269 qpair failed and we were unable to recover it. 00:24:21.269 [2024-07-24 19:06:58.770167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.269 [2024-07-24 19:06:58.770192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.269 qpair failed and we were unable to recover it. 00:24:21.269 [2024-07-24 19:06:58.770369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.269 [2024-07-24 19:06:58.770394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.269 qpair failed and we were unable to recover it. 00:24:21.269 [2024-07-24 19:06:58.770544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.269 [2024-07-24 19:06:58.770569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.269 qpair failed and we were unable to recover it. 00:24:21.269 [2024-07-24 19:06:58.770729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.269 [2024-07-24 19:06:58.770753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.269 qpair failed and we were unable to recover it. 00:24:21.269 [2024-07-24 19:06:58.770904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.269 [2024-07-24 19:06:58.770928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.269 qpair failed and we were unable to recover it. 00:24:21.269 [2024-07-24 19:06:58.771079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.269 [2024-07-24 19:06:58.771111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.269 qpair failed and we were unable to recover it. 00:24:21.269 [2024-07-24 19:06:58.771246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.269 [2024-07-24 19:06:58.771271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.269 qpair failed and we were unable to recover it. 00:24:21.269 [2024-07-24 19:06:58.771420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.269 [2024-07-24 19:06:58.771444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.269 qpair failed and we were unable to recover it. 00:24:21.269 [2024-07-24 19:06:58.771567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.269 [2024-07-24 19:06:58.771592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.269 qpair failed and we were unable to recover it. 
00:24:21.269 [2024-07-24 19:06:58.771717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.269 [2024-07-24 19:06:58.771742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.269 qpair failed and we were unable to recover it. 00:24:21.269 [2024-07-24 19:06:58.771894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.269 [2024-07-24 19:06:58.771918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.269 qpair failed and we were unable to recover it. 00:24:21.269 [2024-07-24 19:06:58.772040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.269 [2024-07-24 19:06:58.772064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.269 qpair failed and we were unable to recover it. 00:24:21.269 [2024-07-24 19:06:58.772198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.269 [2024-07-24 19:06:58.772224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.269 qpair failed and we were unable to recover it. 00:24:21.269 [2024-07-24 19:06:58.772372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.269 [2024-07-24 19:06:58.772396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.269 qpair failed and we were unable to recover it. 00:24:21.269 [2024-07-24 19:06:58.772516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.269 [2024-07-24 19:06:58.772540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.269 qpair failed and we were unable to recover it. 00:24:21.269 [2024-07-24 19:06:58.772671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.269 [2024-07-24 19:06:58.772695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.269 qpair failed and we were unable to recover it. 00:24:21.269 [2024-07-24 19:06:58.772856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.269 [2024-07-24 19:06:58.772881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.269 qpair failed and we were unable to recover it. 00:24:21.269 [2024-07-24 19:06:58.773030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.269 [2024-07-24 19:06:58.773055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.269 qpair failed and we were unable to recover it. 00:24:21.269 [2024-07-24 19:06:58.773209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.269 [2024-07-24 19:06:58.773239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.269 qpair failed and we were unable to recover it. 
00:24:21.269 [2024-07-24 19:06:58.773395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.269 [2024-07-24 19:06:58.773420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.269 qpair failed and we were unable to recover it. 00:24:21.269 [2024-07-24 19:06:58.773569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.269 [2024-07-24 19:06:58.773594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.269 qpair failed and we were unable to recover it. 00:24:21.269 [2024-07-24 19:06:58.773719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.269 [2024-07-24 19:06:58.773743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.269 qpair failed and we were unable to recover it. 00:24:21.269 [2024-07-24 19:06:58.773888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.269 [2024-07-24 19:06:58.773913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.269 qpair failed and we were unable to recover it. 00:24:21.269 [2024-07-24 19:06:58.774066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.269 [2024-07-24 19:06:58.774092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.269 qpair failed and we were unable to recover it. 00:24:21.269 [2024-07-24 19:06:58.774229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.269 [2024-07-24 19:06:58.774254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.269 qpair failed and we were unable to recover it. 00:24:21.269 [2024-07-24 19:06:58.774379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.269 [2024-07-24 19:06:58.774404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.269 qpair failed and we were unable to recover it. 00:24:21.269 [2024-07-24 19:06:58.774574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.269 [2024-07-24 19:06:58.774599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.269 qpair failed and we were unable to recover it. 00:24:21.269 [2024-07-24 19:06:58.774721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.269 [2024-07-24 19:06:58.774746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.269 qpair failed and we were unable to recover it. 00:24:21.269 [2024-07-24 19:06:58.774898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.774922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 
00:24:21.270 [2024-07-24 19:06:58.775074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.775100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 00:24:21.270 [2024-07-24 19:06:58.775238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.775265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 00:24:21.270 [2024-07-24 19:06:58.775418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.775444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 00:24:21.270 [2024-07-24 19:06:58.775600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.775625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 00:24:21.270 [2024-07-24 19:06:58.775814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.775839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 00:24:21.270 [2024-07-24 19:06:58.775965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.775989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 00:24:21.270 [2024-07-24 19:06:58.776117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.776141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 00:24:21.270 [2024-07-24 19:06:58.776294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.776319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 00:24:21.270 [2024-07-24 19:06:58.776438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.776463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 00:24:21.270 [2024-07-24 19:06:58.776620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.776644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 
00:24:21.270 [2024-07-24 19:06:58.776794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.776818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 00:24:21.270 [2024-07-24 19:06:58.776967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.776991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 00:24:21.270 [2024-07-24 19:06:58.777147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.777173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 00:24:21.270 [2024-07-24 19:06:58.777323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.777348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 00:24:21.270 [2024-07-24 19:06:58.777468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.777493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 00:24:21.270 [2024-07-24 19:06:58.777642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.777666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 00:24:21.270 [2024-07-24 19:06:58.777799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.777823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 00:24:21.270 [2024-07-24 19:06:58.777977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.778003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 00:24:21.270 [2024-07-24 19:06:58.778177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.778202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 00:24:21.270 [2024-07-24 19:06:58.778323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.778348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 
00:24:21.270 [2024-07-24 19:06:58.778465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.778490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 00:24:21.270 [2024-07-24 19:06:58.778668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.778692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 00:24:21.270 [2024-07-24 19:06:58.778822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.778846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 00:24:21.270 [2024-07-24 19:06:58.778972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.778996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 00:24:21.270 [2024-07-24 19:06:58.779144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.779169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 00:24:21.270 [2024-07-24 19:06:58.779293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.779317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 00:24:21.270 [2024-07-24 19:06:58.779468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.779492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 00:24:21.270 [2024-07-24 19:06:58.779616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.779641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 00:24:21.270 [2024-07-24 19:06:58.779820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.779845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 00:24:21.270 [2024-07-24 19:06:58.779974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.779999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 
00:24:21.270 [2024-07-24 19:06:58.780146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.780171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 00:24:21.270 [2024-07-24 19:06:58.780322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.780346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 00:24:21.270 [2024-07-24 19:06:58.780500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.780524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 00:24:21.270 [2024-07-24 19:06:58.780678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.780703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 00:24:21.270 [2024-07-24 19:06:58.780827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.780851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 00:24:21.270 [2024-07-24 19:06:58.780998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.781023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 00:24:21.270 [2024-07-24 19:06:58.781178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.781203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.270 qpair failed and we were unable to recover it. 00:24:21.270 [2024-07-24 19:06:58.781336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.270 [2024-07-24 19:06:58.781360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 00:24:21.271 [2024-07-24 19:06:58.781507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.781532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 00:24:21.271 [2024-07-24 19:06:58.781686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.781710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 
00:24:21.271 [2024-07-24 19:06:58.781863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.781887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 00:24:21.271 [2024-07-24 19:06:58.782056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.782095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 00:24:21.271 [2024-07-24 19:06:58.782265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.782294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 00:24:21.271 [2024-07-24 19:06:58.782415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.782441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 00:24:21.271 [2024-07-24 19:06:58.782604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.782631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 00:24:21.271 [2024-07-24 19:06:58.782791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.782817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 00:24:21.271 [2024-07-24 19:06:58.782971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.782997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 00:24:21.271 [2024-07-24 19:06:58.783135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.783161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 00:24:21.271 [2024-07-24 19:06:58.783285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.783309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 00:24:21.271 [2024-07-24 19:06:58.783460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.783484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 
00:24:21.271 [2024-07-24 19:06:58.783611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.783638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 00:24:21.271 [2024-07-24 19:06:58.783816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.783840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 00:24:21.271 [2024-07-24 19:06:58.783986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.784010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 00:24:21.271 [2024-07-24 19:06:58.784136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.784162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 00:24:21.271 [2024-07-24 19:06:58.784315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.784339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 00:24:21.271 [2024-07-24 19:06:58.784496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.784520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 00:24:21.271 [2024-07-24 19:06:58.784640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.784665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 00:24:21.271 [2024-07-24 19:06:58.784796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.784820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 00:24:21.271 [2024-07-24 19:06:58.784971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.784994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 00:24:21.271 [2024-07-24 19:06:58.785137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.785162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 
00:24:21.271 [2024-07-24 19:06:58.785284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.785307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 00:24:21.271 [2024-07-24 19:06:58.785436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.785461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 00:24:21.271 [2024-07-24 19:06:58.785616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.785641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 00:24:21.271 [2024-07-24 19:06:58.785795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.785819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 00:24:21.271 [2024-07-24 19:06:58.785939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.785964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 00:24:21.271 [2024-07-24 19:06:58.786088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.786119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 00:24:21.271 [2024-07-24 19:06:58.786272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.786297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 00:24:21.271 [2024-07-24 19:06:58.786427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.786451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 00:24:21.271 [2024-07-24 19:06:58.786578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.786603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 00:24:21.271 [2024-07-24 19:06:58.786753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.786778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 
00:24:21.271 [2024-07-24 19:06:58.786929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.786954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 00:24:21.271 [2024-07-24 19:06:58.787085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.787115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 00:24:21.271 [2024-07-24 19:06:58.787268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.787292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 00:24:21.271 [2024-07-24 19:06:58.787427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.787451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 00:24:21.271 [2024-07-24 19:06:58.787627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.271 [2024-07-24 19:06:58.787652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.271 qpair failed and we were unable to recover it. 00:24:21.271 [2024-07-24 19:06:58.787778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.787802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 00:24:21.272 [2024-07-24 19:06:58.787928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.787953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 00:24:21.272 [2024-07-24 19:06:58.788078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.788108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 00:24:21.272 [2024-07-24 19:06:58.788284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.788308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 00:24:21.272 [2024-07-24 19:06:58.788486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.788510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 
00:24:21.272 [2024-07-24 19:06:58.788657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.788681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 00:24:21.272 [2024-07-24 19:06:58.788808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.788832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 00:24:21.272 [2024-07-24 19:06:58.788963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.788987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 00:24:21.272 [2024-07-24 19:06:58.789171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.789197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 00:24:21.272 [2024-07-24 19:06:58.789318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.789345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 00:24:21.272 [2024-07-24 19:06:58.789464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.789487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 00:24:21.272 [2024-07-24 19:06:58.789618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.789642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 00:24:21.272 [2024-07-24 19:06:58.789769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.789794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 00:24:21.272 [2024-07-24 19:06:58.789913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.789937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 00:24:21.272 [2024-07-24 19:06:58.790086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.790117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 
00:24:21.272 [2024-07-24 19:06:58.790274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.790298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 00:24:21.272 [2024-07-24 19:06:58.790422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.790445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 00:24:21.272 [2024-07-24 19:06:58.790622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.790646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 00:24:21.272 [2024-07-24 19:06:58.790763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.790789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 00:24:21.272 [2024-07-24 19:06:58.790964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.790988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 00:24:21.272 [2024-07-24 19:06:58.791140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.791165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 00:24:21.272 [2024-07-24 19:06:58.791317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.791342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 00:24:21.272 [2024-07-24 19:06:58.791484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.791508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 00:24:21.272 [2024-07-24 19:06:58.791671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.791695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 00:24:21.272 [2024-07-24 19:06:58.791842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.791868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 
00:24:21.272 [2024-07-24 19:06:58.792024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.792048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 00:24:21.272 [2024-07-24 19:06:58.792197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.792222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 00:24:21.272 [2024-07-24 19:06:58.792354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.792379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 00:24:21.272 [2024-07-24 19:06:58.792532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.792557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 00:24:21.272 [2024-07-24 19:06:58.792708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.792731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 00:24:21.272 [2024-07-24 19:06:58.792882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.792905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 00:24:21.272 [2024-07-24 19:06:58.793035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.793060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 00:24:21.272 [2024-07-24 19:06:58.793217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.793241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 00:24:21.272 [2024-07-24 19:06:58.793394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.793418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 00:24:21.272 [2024-07-24 19:06:58.793545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.793570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 
00:24:21.272 [2024-07-24 19:06:58.793693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.793716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 00:24:21.272 [2024-07-24 19:06:58.793868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.793896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 00:24:21.272 [2024-07-24 19:06:58.794013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.794038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.272 qpair failed and we were unable to recover it. 00:24:21.272 [2024-07-24 19:06:58.794188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.272 [2024-07-24 19:06:58.794213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.273 qpair failed and we were unable to recover it. 00:24:21.273 [2024-07-24 19:06:58.794369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.273 [2024-07-24 19:06:58.794393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.273 qpair failed and we were unable to recover it. 00:24:21.273 [2024-07-24 19:06:58.794514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.273 [2024-07-24 19:06:58.794539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.273 qpair failed and we were unable to recover it. 00:24:21.273 [2024-07-24 19:06:58.794668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.273 [2024-07-24 19:06:58.794692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.273 qpair failed and we were unable to recover it. 00:24:21.273 [2024-07-24 19:06:58.794824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.273 [2024-07-24 19:06:58.794849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.273 qpair failed and we were unable to recover it. 00:24:21.273 [2024-07-24 19:06:58.794971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.273 [2024-07-24 19:06:58.794995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.273 qpair failed and we were unable to recover it. 00:24:21.562 [2024-07-24 19:06:58.795124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.562 [2024-07-24 19:06:58.795149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.562 qpair failed and we were unable to recover it. 
00:24:21.562 [2024-07-24 19:06:58.795274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.562 [2024-07-24 19:06:58.795298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.562 qpair failed and we were unable to recover it. 00:24:21.562 [2024-07-24 19:06:58.795499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.562 [2024-07-24 19:06:58.795522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.562 qpair failed and we were unable to recover it. 00:24:21.562 [2024-07-24 19:06:58.795647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.562 [2024-07-24 19:06:58.795671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.562 qpair failed and we were unable to recover it. 00:24:21.562 [2024-07-24 19:06:58.795819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.562 [2024-07-24 19:06:58.795843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.562 qpair failed and we were unable to recover it. 00:24:21.562 [2024-07-24 19:06:58.795968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.562 [2024-07-24 19:06:58.795992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.562 qpair failed and we were unable to recover it. 00:24:21.562 [2024-07-24 19:06:58.796149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.562 [2024-07-24 19:06:58.796175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.562 qpair failed and we were unable to recover it. 00:24:21.562 [2024-07-24 19:06:58.796299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.562 [2024-07-24 19:06:58.796323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.562 qpair failed and we were unable to recover it. 00:24:21.562 [2024-07-24 19:06:58.796479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.562 [2024-07-24 19:06:58.796502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.562 qpair failed and we were unable to recover it. 00:24:21.562 [2024-07-24 19:06:58.796622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.562 [2024-07-24 19:06:58.796647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.562 qpair failed and we were unable to recover it. 00:24:21.562 [2024-07-24 19:06:58.796774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.562 [2024-07-24 19:06:58.796798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.562 qpair failed and we were unable to recover it. 
00:24:21.562 [2024-07-24 19:06:58.796951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.562 [2024-07-24 19:06:58.796976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.562 qpair failed and we were unable to recover it. 00:24:21.562 [2024-07-24 19:06:58.797107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.562 [2024-07-24 19:06:58.797134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.562 qpair failed and we were unable to recover it. 00:24:21.563 [2024-07-24 19:06:58.797257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.563 [2024-07-24 19:06:58.797281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.563 qpair failed and we were unable to recover it. 00:24:21.563 [2024-07-24 19:06:58.797400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.563 [2024-07-24 19:06:58.797425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.563 qpair failed and we were unable to recover it. 00:24:21.563 [2024-07-24 19:06:58.797555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.563 [2024-07-24 19:06:58.797578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.563 qpair failed and we were unable to recover it. 00:24:21.563 [2024-07-24 19:06:58.797706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.563 [2024-07-24 19:06:58.797731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.563 qpair failed and we were unable to recover it. 00:24:21.563 [2024-07-24 19:06:58.797854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.563 [2024-07-24 19:06:58.797878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.563 qpair failed and we were unable to recover it. 00:24:21.563 [2024-07-24 19:06:58.797997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.563 [2024-07-24 19:06:58.798021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.563 qpair failed and we were unable to recover it. 00:24:21.563 [2024-07-24 19:06:58.798145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.563 [2024-07-24 19:06:58.798177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.563 qpair failed and we were unable to recover it. 00:24:21.563 [2024-07-24 19:06:58.798308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.563 [2024-07-24 19:06:58.798332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.563 qpair failed and we were unable to recover it. 
00:24:21.563 [2024-07-24 19:06:58.798460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.563 [2024-07-24 19:06:58.798485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.563 qpair failed and we were unable to recover it. 00:24:21.563 [2024-07-24 19:06:58.798616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.563 [2024-07-24 19:06:58.798641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.563 qpair failed and we were unable to recover it. 00:24:21.563 [2024-07-24 19:06:58.798794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.563 [2024-07-24 19:06:58.798819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.563 qpair failed and we were unable to recover it. 00:24:21.563 [2024-07-24 19:06:58.798939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.563 [2024-07-24 19:06:58.798963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.563 qpair failed and we were unable to recover it. 00:24:21.563 [2024-07-24 19:06:58.799090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.563 [2024-07-24 19:06:58.799120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.563 qpair failed and we were unable to recover it. 00:24:21.563 [2024-07-24 19:06:58.799284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.563 [2024-07-24 19:06:58.799310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.563 qpair failed and we were unable to recover it. 00:24:21.563 [2024-07-24 19:06:58.799439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.563 [2024-07-24 19:06:58.799463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.563 qpair failed and we were unable to recover it. 00:24:21.563 [2024-07-24 19:06:58.799599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.563 [2024-07-24 19:06:58.799624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.563 qpair failed and we were unable to recover it. 00:24:21.563 [2024-07-24 19:06:58.799749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.563 [2024-07-24 19:06:58.799773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.563 qpair failed and we were unable to recover it. 00:24:21.563 [2024-07-24 19:06:58.799923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.563 [2024-07-24 19:06:58.799947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.563 qpair failed and we were unable to recover it. 
00:24:21.563 [2024-07-24 19:06:58.800434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.563 [2024-07-24 19:06:58.800472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.563 qpair failed and we were unable to recover it.
00:24:21.564 [2024-07-24 19:06:58.807399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.564 [2024-07-24 19:06:58.807437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.564 qpair failed and we were unable to recover it.
00:24:21.568 [2024-07-24 19:06:58.832343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.568 [2024-07-24 19:06:58.832368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.568 qpair failed and we were unable to recover it.
00:24:21.568 [2024-07-24 19:06:58.832536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.568 [2024-07-24 19:06:58.832560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.568 qpair failed and we were unable to recover it. 00:24:21.568 [2024-07-24 19:06:58.832709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.568 [2024-07-24 19:06:58.832737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.568 qpair failed and we were unable to recover it. 00:24:21.568 [2024-07-24 19:06:58.832892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.568 [2024-07-24 19:06:58.832917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.568 qpair failed and we were unable to recover it. 00:24:21.568 [2024-07-24 19:06:58.833061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.568 [2024-07-24 19:06:58.833086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.568 qpair failed and we were unable to recover it. 00:24:21.568 [2024-07-24 19:06:58.833234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.568 [2024-07-24 19:06:58.833260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.568 qpair failed and we were unable to recover it. 00:24:21.568 [2024-07-24 19:06:58.833434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.568 [2024-07-24 19:06:58.833459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.568 qpair failed and we were unable to recover it. 00:24:21.568 [2024-07-24 19:06:58.833585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.568 [2024-07-24 19:06:58.833610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.568 qpair failed and we were unable to recover it. 00:24:21.568 [2024-07-24 19:06:58.833758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.568 [2024-07-24 19:06:58.833782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.568 qpair failed and we were unable to recover it. 00:24:21.568 [2024-07-24 19:06:58.833911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.568 [2024-07-24 19:06:58.833936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.568 qpair failed and we were unable to recover it. 00:24:21.568 [2024-07-24 19:06:58.834069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.568 [2024-07-24 19:06:58.834093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 
00:24:21.569 [2024-07-24 19:06:58.834250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.834275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 00:24:21.569 [2024-07-24 19:06:58.834425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.834449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 00:24:21.569 [2024-07-24 19:06:58.834599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.834624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 00:24:21.569 [2024-07-24 19:06:58.834783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.834808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 00:24:21.569 [2024-07-24 19:06:58.834927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.834953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 00:24:21.569 [2024-07-24 19:06:58.835118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.835144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 00:24:21.569 [2024-07-24 19:06:58.835268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.835293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 00:24:21.569 [2024-07-24 19:06:58.835446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.835471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 00:24:21.569 [2024-07-24 19:06:58.835620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.835645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 00:24:21.569 [2024-07-24 19:06:58.835800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.835824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 
00:24:21.569 [2024-07-24 19:06:58.835998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.836023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 00:24:21.569 [2024-07-24 19:06:58.836146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.836171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 00:24:21.569 [2024-07-24 19:06:58.836297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.836322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 00:24:21.569 [2024-07-24 19:06:58.836469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.836494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 00:24:21.569 [2024-07-24 19:06:58.836623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.836647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 00:24:21.569 [2024-07-24 19:06:58.836771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.836796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 00:24:21.569 [2024-07-24 19:06:58.836918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.836943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 00:24:21.569 [2024-07-24 19:06:58.837072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.837097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 00:24:21.569 [2024-07-24 19:06:58.837227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.837255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 00:24:21.569 [2024-07-24 19:06:58.837379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.837404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 
00:24:21.569 [2024-07-24 19:06:58.837582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.837607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 00:24:21.569 [2024-07-24 19:06:58.837757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.837782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 00:24:21.569 [2024-07-24 19:06:58.837910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.837935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 00:24:21.569 [2024-07-24 19:06:58.838057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.838081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 00:24:21.569 [2024-07-24 19:06:58.838220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.838245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 00:24:21.569 [2024-07-24 19:06:58.838374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.838399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 00:24:21.569 [2024-07-24 19:06:58.838525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.838550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 00:24:21.569 [2024-07-24 19:06:58.838669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.838693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 00:24:21.569 [2024-07-24 19:06:58.838837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.838861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 00:24:21.569 [2024-07-24 19:06:58.838984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.839008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 
00:24:21.569 [2024-07-24 19:06:58.839156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.839182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 00:24:21.569 [2024-07-24 19:06:58.839302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.839326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 00:24:21.569 [2024-07-24 19:06:58.839492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.839517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 00:24:21.569 [2024-07-24 19:06:58.839660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.839684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 00:24:21.569 [2024-07-24 19:06:58.839833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.839858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 00:24:21.569 [2024-07-24 19:06:58.840031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.840056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 00:24:21.569 [2024-07-24 19:06:58.840181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.840206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.569 qpair failed and we were unable to recover it. 00:24:21.569 [2024-07-24 19:06:58.840359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.569 [2024-07-24 19:06:58.840384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 00:24:21.570 [2024-07-24 19:06:58.840541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.840566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 00:24:21.570 [2024-07-24 19:06:58.840712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.840737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 
00:24:21.570 [2024-07-24 19:06:58.840908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.840933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 00:24:21.570 [2024-07-24 19:06:58.841060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.841086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 00:24:21.570 [2024-07-24 19:06:58.841276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.841302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 00:24:21.570 [2024-07-24 19:06:58.841469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.841494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 00:24:21.570 [2024-07-24 19:06:58.841643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.841668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 00:24:21.570 [2024-07-24 19:06:58.841804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.841829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 00:24:21.570 [2024-07-24 19:06:58.841984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.842008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 00:24:21.570 [2024-07-24 19:06:58.842167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.842192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 00:24:21.570 [2024-07-24 19:06:58.842340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.842365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 00:24:21.570 [2024-07-24 19:06:58.842488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.842513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 
00:24:21.570 [2024-07-24 19:06:58.842644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.842670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 00:24:21.570 [2024-07-24 19:06:58.842822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.842847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 00:24:21.570 [2024-07-24 19:06:58.842996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.843021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 00:24:21.570 [2024-07-24 19:06:58.843149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.843175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 00:24:21.570 [2024-07-24 19:06:58.843310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.843335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 00:24:21.570 [2024-07-24 19:06:58.843488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.843512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 00:24:21.570 [2024-07-24 19:06:58.843684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.843708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 00:24:21.570 [2024-07-24 19:06:58.843829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.843854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 00:24:21.570 [2024-07-24 19:06:58.844008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.844032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 00:24:21.570 [2024-07-24 19:06:58.844180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.844206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 
00:24:21.570 [2024-07-24 19:06:58.844362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.844387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 00:24:21.570 [2024-07-24 19:06:58.844538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.844562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 00:24:21.570 [2024-07-24 19:06:58.844678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.844703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 00:24:21.570 [2024-07-24 19:06:58.844830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.844854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 00:24:21.570 [2024-07-24 19:06:58.844982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.845008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 00:24:21.570 [2024-07-24 19:06:58.845168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.845194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 00:24:21.570 [2024-07-24 19:06:58.845350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.845374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 00:24:21.570 [2024-07-24 19:06:58.845531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.845556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 00:24:21.570 [2024-07-24 19:06:58.845678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.845703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 00:24:21.570 [2024-07-24 19:06:58.845825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.845850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 
00:24:21.570 [2024-07-24 19:06:58.845980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.846004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 00:24:21.570 [2024-07-24 19:06:58.846161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.846187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 00:24:21.570 [2024-07-24 19:06:58.846319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.846344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 00:24:21.570 [2024-07-24 19:06:58.846501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.846527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 00:24:21.570 [2024-07-24 19:06:58.846699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.846724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.570 qpair failed and we were unable to recover it. 00:24:21.570 [2024-07-24 19:06:58.846847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.570 [2024-07-24 19:06:58.846872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 00:24:21.571 [2024-07-24 19:06:58.847006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.847030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 00:24:21.571 [2024-07-24 19:06:58.847150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.847176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 00:24:21.571 [2024-07-24 19:06:58.847353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.847378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 00:24:21.571 [2024-07-24 19:06:58.847528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.847553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 
00:24:21.571 [2024-07-24 19:06:58.847709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.847734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 00:24:21.571 [2024-07-24 19:06:58.847848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.847873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 00:24:21.571 [2024-07-24 19:06:58.848020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.848044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 00:24:21.571 [2024-07-24 19:06:58.848180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.848205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 00:24:21.571 [2024-07-24 19:06:58.848381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.848406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 00:24:21.571 [2024-07-24 19:06:58.848564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.848589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 00:24:21.571 [2024-07-24 19:06:58.848740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.848769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 00:24:21.571 [2024-07-24 19:06:58.848920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.848945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 00:24:21.571 [2024-07-24 19:06:58.849120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.849145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 00:24:21.571 [2024-07-24 19:06:58.849297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.849324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 
00:24:21.571 [2024-07-24 19:06:58.849479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.849503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 00:24:21.571 [2024-07-24 19:06:58.849654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.849679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 00:24:21.571 [2024-07-24 19:06:58.849825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.849849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 00:24:21.571 [2024-07-24 19:06:58.849978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.850005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 00:24:21.571 [2024-07-24 19:06:58.850184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.850210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 00:24:21.571 [2024-07-24 19:06:58.850342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.850367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 00:24:21.571 [2024-07-24 19:06:58.850488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.850513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 00:24:21.571 [2024-07-24 19:06:58.850667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.850692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 00:24:21.571 [2024-07-24 19:06:58.850841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.850865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 00:24:21.571 [2024-07-24 19:06:58.850987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.851012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 
00:24:21.571 [2024-07-24 19:06:58.851177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.851203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 00:24:21.571 [2024-07-24 19:06:58.851328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.851353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 00:24:21.571 [2024-07-24 19:06:58.851499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.851524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 00:24:21.571 [2024-07-24 19:06:58.851702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.851727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 00:24:21.571 [2024-07-24 19:06:58.851852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.851877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 00:24:21.571 [2024-07-24 19:06:58.852062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.852087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 00:24:21.571 [2024-07-24 19:06:58.852219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.852244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 00:24:21.571 [2024-07-24 19:06:58.852372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.852397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 00:24:21.571 [2024-07-24 19:06:58.852549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.852573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 00:24:21.571 [2024-07-24 19:06:58.852727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.852752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.571 qpair failed and we were unable to recover it. 
00:24:21.571 [2024-07-24 19:06:58.852882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.571 [2024-07-24 19:06:58.852907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 00:24:21.572 [2024-07-24 19:06:58.853061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.572 [2024-07-24 19:06:58.853086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 00:24:21.572 [2024-07-24 19:06:58.853214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.572 [2024-07-24 19:06:58.853238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 00:24:21.572 [2024-07-24 19:06:58.853388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.572 [2024-07-24 19:06:58.853417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 00:24:21.572 [2024-07-24 19:06:58.853568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.572 [2024-07-24 19:06:58.853593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 00:24:21.572 [2024-07-24 19:06:58.853747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.572 [2024-07-24 19:06:58.853773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 00:24:21.572 [2024-07-24 19:06:58.853901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.572 [2024-07-24 19:06:58.853926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 00:24:21.572 [2024-07-24 19:06:58.854050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.572 [2024-07-24 19:06:58.854076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 00:24:21.572 [2024-07-24 19:06:58.854238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.572 [2024-07-24 19:06:58.854263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 00:24:21.572 [2024-07-24 19:06:58.854393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.572 [2024-07-24 19:06:58.854418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 
00:24:21.572 [2024-07-24 19:06:58.854546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.572 [2024-07-24 19:06:58.854570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 00:24:21.572 [2024-07-24 19:06:58.854692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.572 [2024-07-24 19:06:58.854717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 00:24:21.572 [2024-07-24 19:06:58.854894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.572 [2024-07-24 19:06:58.854919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 00:24:21.572 [2024-07-24 19:06:58.855097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.572 [2024-07-24 19:06:58.855130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 00:24:21.572 [2024-07-24 19:06:58.855282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.572 [2024-07-24 19:06:58.855307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 00:24:21.572 [2024-07-24 19:06:58.855457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.572 [2024-07-24 19:06:58.855482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 00:24:21.572 [2024-07-24 19:06:58.855594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.572 [2024-07-24 19:06:58.855619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 00:24:21.572 [2024-07-24 19:06:58.855748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.572 [2024-07-24 19:06:58.855773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 00:24:21.572 [2024-07-24 19:06:58.855926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.572 [2024-07-24 19:06:58.855950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 00:24:21.572 [2024-07-24 19:06:58.856096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.572 [2024-07-24 19:06:58.856129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 
00:24:21.572 [2024-07-24 19:06:58.856277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.572 [2024-07-24 19:06:58.856302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 00:24:21.572 [2024-07-24 19:06:58.856432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.572 [2024-07-24 19:06:58.856457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 00:24:21.572 [2024-07-24 19:06:58.856585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.572 [2024-07-24 19:06:58.856611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 00:24:21.572 [2024-07-24 19:06:58.856768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.572 [2024-07-24 19:06:58.856794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 00:24:21.572 [2024-07-24 19:06:58.856911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.572 [2024-07-24 19:06:58.856935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 00:24:21.572 [2024-07-24 19:06:58.857087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.572 [2024-07-24 19:06:58.857131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 00:24:21.572 [2024-07-24 19:06:58.857279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.572 [2024-07-24 19:06:58.857303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 00:24:21.572 [2024-07-24 19:06:58.857481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.572 [2024-07-24 19:06:58.857505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 00:24:21.572 [2024-07-24 19:06:58.857657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.572 [2024-07-24 19:06:58.857682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 00:24:21.572 [2024-07-24 19:06:58.857837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.572 [2024-07-24 19:06:58.857862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 
00:24:21.572 [2024-07-24 19:06:58.858034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.572 [2024-07-24 19:06:58.858062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 00:24:21.572 [2024-07-24 19:06:58.858222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.572 [2024-07-24 19:06:58.858248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 00:24:21.572 [2024-07-24 19:06:58.858399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.572 [2024-07-24 19:06:58.858424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 00:24:21.572 [2024-07-24 19:06:58.858557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.572 [2024-07-24 19:06:58.858581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 00:24:21.572 [2024-07-24 19:06:58.858731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.572 [2024-07-24 19:06:58.858756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.572 qpair failed and we were unable to recover it. 00:24:21.573 [2024-07-24 19:06:58.858879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.858904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 00:24:21.573 [2024-07-24 19:06:58.859053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.859078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 00:24:21.573 [2024-07-24 19:06:58.859243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.859268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 00:24:21.573 [2024-07-24 19:06:58.859401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.859426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 00:24:21.573 [2024-07-24 19:06:58.859572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.859596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 
00:24:21.573 [2024-07-24 19:06:58.859728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.859753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 00:24:21.573 [2024-07-24 19:06:58.859905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.859930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 00:24:21.573 [2024-07-24 19:06:58.860077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.860108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 00:24:21.573 [2024-07-24 19:06:58.860231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.860256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 00:24:21.573 [2024-07-24 19:06:58.860408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.860432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 00:24:21.573 [2024-07-24 19:06:58.860604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.860629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 00:24:21.573 [2024-07-24 19:06:58.860755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.860780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 00:24:21.573 [2024-07-24 19:06:58.860907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.860932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 00:24:21.573 [2024-07-24 19:06:58.861088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.861130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 00:24:21.573 [2024-07-24 19:06:58.861276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.861301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 
00:24:21.573 [2024-07-24 19:06:58.861423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.861449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 00:24:21.573 [2024-07-24 19:06:58.861606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.861630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 00:24:21.573 [2024-07-24 19:06:58.861776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.861801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 00:24:21.573 [2024-07-24 19:06:58.861933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.861957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 00:24:21.573 [2024-07-24 19:06:58.862113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.862139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 00:24:21.573 [2024-07-24 19:06:58.862264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.862289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 00:24:21.573 [2024-07-24 19:06:58.862417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.862441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 00:24:21.573 [2024-07-24 19:06:58.862598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.862624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 00:24:21.573 [2024-07-24 19:06:58.862781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.862806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 00:24:21.573 [2024-07-24 19:06:58.862960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.862984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 
00:24:21.573 [2024-07-24 19:06:58.863141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.863168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 00:24:21.573 [2024-07-24 19:06:58.863286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.863312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 00:24:21.573 [2024-07-24 19:06:58.863487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.863512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 00:24:21.573 [2024-07-24 19:06:58.863664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.863689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 00:24:21.573 [2024-07-24 19:06:58.863833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.863858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 00:24:21.573 [2024-07-24 19:06:58.864011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.864036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 00:24:21.573 [2024-07-24 19:06:58.864164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.864190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 00:24:21.573 [2024-07-24 19:06:58.864342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.864367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 00:24:21.573 [2024-07-24 19:06:58.864520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.864545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 00:24:21.573 [2024-07-24 19:06:58.864678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.864703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 
00:24:21.573 [2024-07-24 19:06:58.864832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.864858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 00:24:21.573 [2024-07-24 19:06:58.865003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.865028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 00:24:21.573 [2024-07-24 19:06:58.865176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.573 [2024-07-24 19:06:58.865202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.573 qpair failed and we were unable to recover it. 00:24:21.574 [2024-07-24 19:06:58.865350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.865374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 00:24:21.574 [2024-07-24 19:06:58.865528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.865553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 00:24:21.574 [2024-07-24 19:06:58.865708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.865733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 00:24:21.574 [2024-07-24 19:06:58.865859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.865883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 00:24:21.574 [2024-07-24 19:06:58.866061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.866086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 00:24:21.574 [2024-07-24 19:06:58.866211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.866236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 00:24:21.574 [2024-07-24 19:06:58.866416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.866440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 
00:24:21.574 [2024-07-24 19:06:58.866595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.866620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 00:24:21.574 [2024-07-24 19:06:58.866770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.866795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 00:24:21.574 [2024-07-24 19:06:58.866915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.866940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 00:24:21.574 [2024-07-24 19:06:58.867088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.867120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 00:24:21.574 [2024-07-24 19:06:58.867247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.867272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 00:24:21.574 [2024-07-24 19:06:58.867399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.867424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 00:24:21.574 [2024-07-24 19:06:58.867553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.867579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 00:24:21.574 [2024-07-24 19:06:58.867755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.867780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 00:24:21.574 [2024-07-24 19:06:58.867957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.867982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 00:24:21.574 [2024-07-24 19:06:58.868118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.868145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 
00:24:21.574 [2024-07-24 19:06:58.868277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.868302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 00:24:21.574 [2024-07-24 19:06:58.868433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.868459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 00:24:21.574 [2024-07-24 19:06:58.868593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.868618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 00:24:21.574 [2024-07-24 19:06:58.868768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.868793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 00:24:21.574 [2024-07-24 19:06:58.868915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.868940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 00:24:21.574 [2024-07-24 19:06:58.869089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.869122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 00:24:21.574 [2024-07-24 19:06:58.869271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.869295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 00:24:21.574 [2024-07-24 19:06:58.869449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.869473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 00:24:21.574 [2024-07-24 19:06:58.869600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.869628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 00:24:21.574 [2024-07-24 19:06:58.869803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.869827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 
00:24:21.574 [2024-07-24 19:06:58.869960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.869986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 00:24:21.574 [2024-07-24 19:06:58.870114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.870140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 00:24:21.574 [2024-07-24 19:06:58.870316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.870341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 00:24:21.574 [2024-07-24 19:06:58.870497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.870521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 00:24:21.574 [2024-07-24 19:06:58.870652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.870678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 00:24:21.574 [2024-07-24 19:06:58.870827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.870852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 00:24:21.574 [2024-07-24 19:06:58.871006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.871031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 00:24:21.574 [2024-07-24 19:06:58.871183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.871208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 00:24:21.574 [2024-07-24 19:06:58.871332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.871357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 00:24:21.574 [2024-07-24 19:06:58.871510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.871534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.574 qpair failed and we were unable to recover it. 
00:24:21.574 [2024-07-24 19:06:58.871686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.574 [2024-07-24 19:06:58.871710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.871835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.871861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.872016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.872041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.872193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.872218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.872367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.872391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.872519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.872545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.872680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.872705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.872858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.872882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.873034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.873059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.873189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.873215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 
00:24:21.575 [2024-07-24 19:06:58.873344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.873368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.873519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.873544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.873696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.873721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.873847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.873872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.874026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.874051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.874218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.874250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.874401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.874426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.874543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.874568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.874717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.874742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.874897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.874922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 
00:24:21.575 [2024-07-24 19:06:58.875065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.875089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.875222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.875247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.875400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.875424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.875595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.875620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.875749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.875774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.875921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.875948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.876094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.876124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.876279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.876304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.876417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.876442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.876571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.876596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 
00:24:21.575 [2024-07-24 19:06:58.876750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.876775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.876922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.876947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.877072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.877097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.877279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.877304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.877459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.877483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.877614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.877639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.877794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.877819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.877938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.877963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.575 qpair failed and we were unable to recover it. 00:24:21.575 [2024-07-24 19:06:58.878087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.575 [2024-07-24 19:06:58.878119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.576 qpair failed and we were unable to recover it. 00:24:21.576 [2024-07-24 19:06:58.878272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.576 [2024-07-24 19:06:58.878297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.576 qpair failed and we were unable to recover it. 
00:24:21.576 [2024-07-24 19:06:58.878455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.576 [2024-07-24 19:06:58.878479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.576 qpair failed and we were unable to recover it. 00:24:21.576 [2024-07-24 19:06:58.878635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.576 [2024-07-24 19:06:58.878659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.576 qpair failed and we were unable to recover it. 00:24:21.576 [2024-07-24 19:06:58.878785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.576 [2024-07-24 19:06:58.878812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.576 qpair failed and we were unable to recover it. 00:24:21.576 [2024-07-24 19:06:58.878990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.576 [2024-07-24 19:06:58.879015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.576 qpair failed and we were unable to recover it. 00:24:21.576 [2024-07-24 19:06:58.879163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.576 [2024-07-24 19:06:58.879189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.576 qpair failed and we were unable to recover it. 00:24:21.576 [2024-07-24 19:06:58.879316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.576 [2024-07-24 19:06:58.879341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.576 qpair failed and we were unable to recover it. 00:24:21.576 [2024-07-24 19:06:58.879472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.576 [2024-07-24 19:06:58.879496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.576 qpair failed and we were unable to recover it. 00:24:21.576 [2024-07-24 19:06:58.879647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.576 [2024-07-24 19:06:58.879672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.576 qpair failed and we were unable to recover it. 00:24:21.576 [2024-07-24 19:06:58.879832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.576 [2024-07-24 19:06:58.879856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.576 qpair failed and we were unable to recover it. 00:24:21.576 [2024-07-24 19:06:58.880039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.576 [2024-07-24 19:06:58.880063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.576 qpair failed and we were unable to recover it. 
00:24:21.576 [2024-07-24 19:06:58.880214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.576 [2024-07-24 19:06:58.880240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.576 qpair failed and we were unable to recover it. 00:24:21.576 [2024-07-24 19:06:58.880370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.576 [2024-07-24 19:06:58.880396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.576 qpair failed and we were unable to recover it. 00:24:21.576 [2024-07-24 19:06:58.880517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.576 [2024-07-24 19:06:58.880543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.576 qpair failed and we were unable to recover it. 00:24:21.576 [2024-07-24 19:06:58.880669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.576 [2024-07-24 19:06:58.880694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.576 qpair failed and we were unable to recover it. 00:24:21.576 [2024-07-24 19:06:58.880841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.576 [2024-07-24 19:06:58.880866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.576 qpair failed and we were unable to recover it. 00:24:21.576 [2024-07-24 19:06:58.881045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.576 [2024-07-24 19:06:58.881070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.576 qpair failed and we were unable to recover it. 00:24:21.576 [2024-07-24 19:06:58.881211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.576 [2024-07-24 19:06:58.881237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.576 qpair failed and we were unable to recover it. 00:24:21.576 [2024-07-24 19:06:58.881392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.576 [2024-07-24 19:06:58.881420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.576 qpair failed and we were unable to recover it. 00:24:21.576 [2024-07-24 19:06:58.881571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.576 [2024-07-24 19:06:58.881596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.576 qpair failed and we were unable to recover it. 00:24:21.576 [2024-07-24 19:06:58.881749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.576 [2024-07-24 19:06:58.881775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.576 qpair failed and we were unable to recover it. 
00:24:21.576 [2024-07-24 19:06:58.881894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.576 [2024-07-24 19:06:58.881918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.576 qpair failed and we were unable to recover it. 00:24:21.576 [2024-07-24 19:06:58.882095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.576 [2024-07-24 19:06:58.882137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.576 qpair failed and we were unable to recover it. 00:24:21.576 [2024-07-24 19:06:58.882263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.576 [2024-07-24 19:06:58.882287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.576 qpair failed and we were unable to recover it. 00:24:21.576 [2024-07-24 19:06:58.882423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.576 [2024-07-24 19:06:58.882448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.576 qpair failed and we were unable to recover it. 00:24:21.576 [2024-07-24 19:06:58.882593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.576 [2024-07-24 19:06:58.882617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.576 qpair failed and we were unable to recover it. 00:24:21.576 [2024-07-24 19:06:58.882797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.576 [2024-07-24 19:06:58.882821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.576 qpair failed and we were unable to recover it. 00:24:21.576 [2024-07-24 19:06:58.882967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.576 [2024-07-24 19:06:58.882992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.576 qpair failed and we were unable to recover it. 00:24:21.576 [2024-07-24 19:06:58.883147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.576 [2024-07-24 19:06:58.883172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.576 qpair failed and we were unable to recover it. 00:24:21.576 [2024-07-24 19:06:58.883325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.576 [2024-07-24 19:06:58.883350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.576 qpair failed and we were unable to recover it. 00:24:21.576 [2024-07-24 19:06:58.883500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.576 [2024-07-24 19:06:58.883525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.576 qpair failed and we were unable to recover it. 
00:24:21.576 [2024-07-24 19:06:58.883660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.576 [2024-07-24 19:06:58.883686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.576 qpair failed and we were unable to recover it. 00:24:21.577 [2024-07-24 19:06:58.883818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.883843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 00:24:21.577 [2024-07-24 19:06:58.883994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.884020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 00:24:21.577 [2024-07-24 19:06:58.884061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d4230 (9): Bad file descriptor 00:24:21.577 [2024-07-24 19:06:58.884294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.884334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 00:24:21.577 [2024-07-24 19:06:58.884495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.884523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 00:24:21.577 [2024-07-24 19:06:58.884654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.884680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 00:24:21.577 [2024-07-24 19:06:58.884831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.884856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 00:24:21.577 [2024-07-24 19:06:58.885016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.885041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 00:24:21.577 [2024-07-24 19:06:58.885172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.885198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 
00:24:21.577 [2024-07-24 19:06:58.885349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.885375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 00:24:21.577 [2024-07-24 19:06:58.885525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.885550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 00:24:21.577 [2024-07-24 19:06:58.885701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.885725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 00:24:21.577 [2024-07-24 19:06:58.885874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.885899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 00:24:21.577 [2024-07-24 19:06:58.886072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.886099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 00:24:21.577 [2024-07-24 19:06:58.886239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.886264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 00:24:21.577 [2024-07-24 19:06:58.886422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.886448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 00:24:21.577 [2024-07-24 19:06:58.886599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.886625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 00:24:21.577 [2024-07-24 19:06:58.886780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.886806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 00:24:21.577 [2024-07-24 19:06:58.886987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.887012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 
00:24:21.577 [2024-07-24 19:06:58.887162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.887187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 00:24:21.577 [2024-07-24 19:06:58.887335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.887359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 00:24:21.577 [2024-07-24 19:06:58.887514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.887539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 00:24:21.577 [2024-07-24 19:06:58.887722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.887747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 00:24:21.577 [2024-07-24 19:06:58.887897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.887922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 00:24:21.577 [2024-07-24 19:06:58.888074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.888099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 00:24:21.577 [2024-07-24 19:06:58.888257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.888281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 00:24:21.577 [2024-07-24 19:06:58.888416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.888442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 00:24:21.577 [2024-07-24 19:06:58.888618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.888643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 00:24:21.577 [2024-07-24 19:06:58.888791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.888815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 
00:24:21.577 [2024-07-24 19:06:58.888943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.888970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 00:24:21.577 [2024-07-24 19:06:58.889114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.889140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 00:24:21.577 [2024-07-24 19:06:58.889265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.889291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 00:24:21.577 [2024-07-24 19:06:58.889443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.889469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 00:24:21.577 [2024-07-24 19:06:58.889628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.889652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 00:24:21.577 [2024-07-24 19:06:58.889830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.889855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 00:24:21.577 [2024-07-24 19:06:58.889976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.890003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 00:24:21.577 [2024-07-24 19:06:58.890162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.890188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 00:24:21.577 [2024-07-24 19:06:58.890330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.577 [2024-07-24 19:06:58.890354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.577 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-24 19:06:58.890511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.890537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 
00:24:21.578 [2024-07-24 19:06:58.890655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.890684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-24 19:06:58.890852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.890877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-24 19:06:58.891052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.891078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-24 19:06:58.891238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.891277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-24 19:06:58.891458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.891484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-24 19:06:58.891611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.891637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-24 19:06:58.891764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.891789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-24 19:06:58.891950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.891974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-24 19:06:58.892098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.892131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-24 19:06:58.892250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.892275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 
00:24:21.578 [2024-07-24 19:06:58.892426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.892451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-24 19:06:58.892596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.892621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-24 19:06:58.892746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.892773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-24 19:06:58.892899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.892925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-24 19:06:58.893113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.893139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-24 19:06:58.893275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.893301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-24 19:06:58.893453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.893478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-24 19:06:58.893627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.893651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-24 19:06:58.893800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.893825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-24 19:06:58.893978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.894004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 
00:24:21.578 [2024-07-24 19:06:58.894163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.894189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-24 19:06:58.894341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.894367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-24 19:06:58.894493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.894519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-24 19:06:58.894670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.894695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-24 19:06:58.894825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.894850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-24 19:06:58.895003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.895028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-24 19:06:58.895162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.895188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-24 19:06:58.895377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.895416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-24 19:06:58.895576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.895603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-24 19:06:58.895779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.895805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 
00:24:21.578 [2024-07-24 19:06:58.895931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.895955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-24 19:06:58.896112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.896138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-24 19:06:58.896269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.896293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-24 19:06:58.896440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.896464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-24 19:06:58.896612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.896636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-24 19:06:58.896770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.896795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.578 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-24 19:06:58.896925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.578 [2024-07-24 19:06:58.896953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-24 19:06:58.897089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.897120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-24 19:06:58.897270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.897294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-24 19:06:58.897469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.897495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 
00:24:21.579 [2024-07-24 19:06:58.897650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.897676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-24 19:06:58.897831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.897856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-24 19:06:58.898036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.898063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-24 19:06:58.898203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.898229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-24 19:06:58.898359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.898385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-24 19:06:58.898506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.898531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-24 19:06:58.898678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.898702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-24 19:06:58.898830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.898855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-24 19:06:58.899007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.899035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-24 19:06:58.899197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.899223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 
00:24:21.579 [2024-07-24 19:06:58.899374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.899399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-24 19:06:58.899549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.899574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-24 19:06:58.899730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.899755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-24 19:06:58.899928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.899953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-24 19:06:58.900128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.900167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-24 19:06:58.900303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.900329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-24 19:06:58.900480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.900505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-24 19:06:58.900693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.900718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-24 19:06:58.900845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.900870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-24 19:06:58.900999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.901024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 
00:24:21.579 [2024-07-24 19:06:58.901159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.901185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-24 19:06:58.901317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.901342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-24 19:06:58.901488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.901513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-24 19:06:58.901667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.901692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-24 19:06:58.901847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.901872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-24 19:06:58.901999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.902023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-24 19:06:58.902212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.902251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-24 19:06:58.902410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.902437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-24 19:06:58.902595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.902621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-24 19:06:58.902771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.902795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 
00:24:21.579 [2024-07-24 19:06:58.902970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.902995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-24 19:06:58.903123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.903150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-24 19:06:58.903327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.903351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-24 19:06:58.903506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.579 [2024-07-24 19:06:58.903530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-24 19:06:58.903656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.903681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 00:24:21.580 [2024-07-24 19:06:58.903853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.903877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 00:24:21.580 [2024-07-24 19:06:58.903995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.904019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 00:24:21.580 [2024-07-24 19:06:58.904145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.904172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 00:24:21.580 [2024-07-24 19:06:58.904293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.904317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 00:24:21.580 [2024-07-24 19:06:58.904472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.904497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 
00:24:21.580 [2024-07-24 19:06:58.904623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.904648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 00:24:21.580 [2024-07-24 19:06:58.904824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.904849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 00:24:21.580 [2024-07-24 19:06:58.904996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.905021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 00:24:21.580 [2024-07-24 19:06:58.905177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.905203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 00:24:21.580 [2024-07-24 19:06:58.905331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.905356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 00:24:21.580 [2024-07-24 19:06:58.905496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.905521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 00:24:21.580 [2024-07-24 19:06:58.905675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.905701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 00:24:21.580 [2024-07-24 19:06:58.905850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.905875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 00:24:21.580 [2024-07-24 19:06:58.905999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.906024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 00:24:21.580 [2024-07-24 19:06:58.906151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.906177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 
00:24:21.580 [2024-07-24 19:06:58.906334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.906358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 00:24:21.580 [2024-07-24 19:06:58.906503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.906528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 00:24:21.580 [2024-07-24 19:06:58.906680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.906705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 00:24:21.580 [2024-07-24 19:06:58.906831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.906856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 00:24:21.580 [2024-07-24 19:06:58.907007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.907035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 00:24:21.580 [2024-07-24 19:06:58.907162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.907187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 00:24:21.580 [2024-07-24 19:06:58.907344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.907370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 00:24:21.580 [2024-07-24 19:06:58.907549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.907574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 00:24:21.580 [2024-07-24 19:06:58.907725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.907750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 00:24:21.580 [2024-07-24 19:06:58.907903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.907929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 
00:24:21.580 [2024-07-24 19:06:58.908086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.908129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 00:24:21.580 [2024-07-24 19:06:58.908265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.908290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 00:24:21.580 [2024-07-24 19:06:58.908449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.908473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 00:24:21.580 [2024-07-24 19:06:58.908599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.908625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 00:24:21.580 [2024-07-24 19:06:58.908776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.908800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 00:24:21.580 [2024-07-24 19:06:58.908920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.908944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 00:24:21.580 [2024-07-24 19:06:58.909122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.909147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 00:24:21.580 [2024-07-24 19:06:58.909304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.909329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 00:24:21.580 [2024-07-24 19:06:58.909484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.909508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 00:24:21.580 [2024-07-24 19:06:58.909663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.909688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 
00:24:21.580 [2024-07-24 19:06:58.909841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.580 [2024-07-24 19:06:58.909865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.580 qpair failed and we were unable to recover it. 00:24:21.580 [2024-07-24 19:06:58.910041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.581 [2024-07-24 19:06:58.910065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.581 qpair failed and we were unable to recover it. 00:24:21.581 [2024-07-24 19:06:58.910236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.581 [2024-07-24 19:06:58.910262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.581 qpair failed and we were unable to recover it. 00:24:21.581 [2024-07-24 19:06:58.910414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.581 [2024-07-24 19:06:58.910439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.581 qpair failed and we were unable to recover it. 00:24:21.581 [2024-07-24 19:06:58.910560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.581 [2024-07-24 19:06:58.910585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.581 qpair failed and we were unable to recover it. 00:24:21.581 [2024-07-24 19:06:58.910707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.581 [2024-07-24 19:06:58.910731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.581 qpair failed and we were unable to recover it. 00:24:21.581 [2024-07-24 19:06:58.910884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.581 [2024-07-24 19:06:58.910909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.581 qpair failed and we were unable to recover it. 00:24:21.581 [2024-07-24 19:06:58.911063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.581 [2024-07-24 19:06:58.911088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.581 qpair failed and we were unable to recover it. 00:24:21.581 [2024-07-24 19:06:58.911245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.581 [2024-07-24 19:06:58.911270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.581 qpair failed and we were unable to recover it. 00:24:21.581 [2024-07-24 19:06:58.911394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.581 [2024-07-24 19:06:58.911419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.581 qpair failed and we were unable to recover it. 
00:24:21.581 [2024-07-24 19:06:58.911596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.581 [2024-07-24 19:06:58.911622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.581 qpair failed and we were unable to recover it. 00:24:21.581 [2024-07-24 19:06:58.911761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.581 [2024-07-24 19:06:58.911785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.581 qpair failed and we were unable to recover it. 00:24:21.581 [2024-07-24 19:06:58.911935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.581 [2024-07-24 19:06:58.911960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.581 qpair failed and we were unable to recover it. 00:24:21.581 [2024-07-24 19:06:58.912083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.581 [2024-07-24 19:06:58.912115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.581 qpair failed and we were unable to recover it. 00:24:21.581 [2024-07-24 19:06:58.912240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.581 [2024-07-24 19:06:58.912265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.581 qpair failed and we were unable to recover it. 00:24:21.581 [2024-07-24 19:06:58.912449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.581 [2024-07-24 19:06:58.912474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.581 qpair failed and we were unable to recover it. 00:24:21.581 [2024-07-24 19:06:58.912623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.581 [2024-07-24 19:06:58.912649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.581 qpair failed and we were unable to recover it. 00:24:21.581 [2024-07-24 19:06:58.912771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.581 [2024-07-24 19:06:58.912797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.581 qpair failed and we were unable to recover it. 00:24:21.581 [2024-07-24 19:06:58.912951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.581 [2024-07-24 19:06:58.912975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.581 qpair failed and we were unable to recover it. 00:24:21.581 [2024-07-24 19:06:58.913177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.581 [2024-07-24 19:06:58.913216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.581 qpair failed and we were unable to recover it. 
00:24:21.581 [2024-07-24 19:06:58.913350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.581 [2024-07-24 19:06:58.913376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.581 qpair failed and we were unable to recover it. 00:24:21.581 [2024-07-24 19:06:58.913512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.581 [2024-07-24 19:06:58.913537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.581 qpair failed and we were unable to recover it. 00:24:21.581 [2024-07-24 19:06:58.913688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.581 [2024-07-24 19:06:58.913713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.581 qpair failed and we were unable to recover it. 00:24:21.581 [2024-07-24 19:06:58.913855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.581 [2024-07-24 19:06:58.913880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.581 qpair failed and we were unable to recover it. 00:24:21.581 [2024-07-24 19:06:58.914014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.581 [2024-07-24 19:06:58.914044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.581 qpair failed and we were unable to recover it. 00:24:21.581 [2024-07-24 19:06:58.914198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.581 [2024-07-24 19:06:58.914226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.581 qpair failed and we were unable to recover it. 00:24:21.581 [2024-07-24 19:06:58.914404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.581 [2024-07-24 19:06:58.914430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.581 qpair failed and we were unable to recover it. 00:24:21.581 [2024-07-24 19:06:58.914556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.581 [2024-07-24 19:06:58.914581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.581 qpair failed and we were unable to recover it. 00:24:21.581 [2024-07-24 19:06:58.914714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.581 [2024-07-24 19:06:58.914738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.581 qpair failed and we were unable to recover it. 00:24:21.581 [2024-07-24 19:06:58.914890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.581 [2024-07-24 19:06:58.914915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.581 qpair failed and we were unable to recover it. 
00:24:21.581 [2024-07-24 19:06:58.915035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.581 [2024-07-24 19:06:58.915059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.581 qpair failed and we were unable to recover it. 00:24:21.581 [2024-07-24 19:06:58.915206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.581 [2024-07-24 19:06:58.915245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.581 qpair failed and we were unable to recover it. 00:24:21.581 [2024-07-24 19:06:58.915429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.581 [2024-07-24 19:06:58.915456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.581 qpair failed and we were unable to recover it. 00:24:21.581 [2024-07-24 19:06:58.915617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.582 [2024-07-24 19:06:58.915642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.582 qpair failed and we were unable to recover it. 00:24:21.582 [2024-07-24 19:06:58.915789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.582 [2024-07-24 19:06:58.915814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.582 qpair failed and we were unable to recover it. 00:24:21.582 [2024-07-24 19:06:58.915948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.582 [2024-07-24 19:06:58.915975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.582 qpair failed and we were unable to recover it. 00:24:21.582 [2024-07-24 19:06:58.916134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.582 [2024-07-24 19:06:58.916159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.582 qpair failed and we were unable to recover it. 00:24:21.582 [2024-07-24 19:06:58.916289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.582 [2024-07-24 19:06:58.916314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.582 qpair failed and we were unable to recover it. 00:24:21.582 [2024-07-24 19:06:58.916468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.582 [2024-07-24 19:06:58.916493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.582 qpair failed and we were unable to recover it. 00:24:21.582 [2024-07-24 19:06:58.916646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.582 [2024-07-24 19:06:58.916673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.582 qpair failed and we were unable to recover it. 
00:24:21.582 [2024-07-24 19:06:58.916816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.582 [2024-07-24 19:06:58.916841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.582 qpair failed and we were unable to recover it.
00:24:21.582 [2024-07-24 19:06:58.917028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.582 [2024-07-24 19:06:58.917053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.582 qpair failed and we were unable to recover it.
00:24:21.582 [2024-07-24 19:06:58.917193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.582 [2024-07-24 19:06:58.917219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.582 qpair failed and we were unable to recover it.
00:24:21.582 [2024-07-24 19:06:58.917343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.582 [2024-07-24 19:06:58.917368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.582 qpair failed and we were unable to recover it.
00:24:21.582 [2024-07-24 19:06:58.917497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.582 [2024-07-24 19:06:58.917522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.582 qpair failed and we were unable to recover it.
00:24:21.582 [2024-07-24 19:06:58.917651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.582 [2024-07-24 19:06:58.917676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.582 qpair failed and we were unable to recover it.
00:24:21.582 [2024-07-24 19:06:58.917794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.582 [2024-07-24 19:06:58.917819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.582 qpair failed and we were unable to recover it.
00:24:21.582 [2024-07-24 19:06:58.917966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.582 [2024-07-24 19:06:58.917991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.582 qpair failed and we were unable to recover it.
00:24:21.582 [2024-07-24 19:06:58.918117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.582 [2024-07-24 19:06:58.918142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.582 qpair failed and we were unable to recover it.
00:24:21.582 [2024-07-24 19:06:58.918294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.582 [2024-07-24 19:06:58.918319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.582 qpair failed and we were unable to recover it.
00:24:21.582 [2024-07-24 19:06:58.918470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.582 [2024-07-24 19:06:58.918495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.582 qpair failed and we were unable to recover it.
00:24:21.582 [2024-07-24 19:06:58.918625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.582 [2024-07-24 19:06:58.918654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.582 qpair failed and we were unable to recover it.
00:24:21.582 [2024-07-24 19:06:58.918807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.582 [2024-07-24 19:06:58.918832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.582 qpair failed and we were unable to recover it.
00:24:21.582 [2024-07-24 19:06:58.919008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.582 [2024-07-24 19:06:58.919033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.582 qpair failed and we were unable to recover it.
00:24:21.582 [2024-07-24 19:06:58.919181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.582 [2024-07-24 19:06:58.919206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.582 qpair failed and we were unable to recover it.
00:24:21.582 [2024-07-24 19:06:58.919341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.582 [2024-07-24 19:06:58.919365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.582 qpair failed and we were unable to recover it.
00:24:21.582 [2024-07-24 19:06:58.919514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.582 [2024-07-24 19:06:58.919539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.582 qpair failed and we were unable to recover it.
00:24:21.582 [2024-07-24 19:06:58.919668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.582 [2024-07-24 19:06:58.919692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.582 qpair failed and we were unable to recover it.
00:24:21.582 [2024-07-24 19:06:58.919876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.582 [2024-07-24 19:06:58.919903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.582 qpair failed and we were unable to recover it.
00:24:21.582 [2024-07-24 19:06:58.920059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.582 [2024-07-24 19:06:58.920085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.582 qpair failed and we were unable to recover it.
00:24:21.582 [2024-07-24 19:06:58.920223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.582 [2024-07-24 19:06:58.920248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.582 qpair failed and we were unable to recover it.
00:24:21.582 [2024-07-24 19:06:58.920391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.582 [2024-07-24 19:06:58.920416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.582 qpair failed and we were unable to recover it.
00:24:21.582 [2024-07-24 19:06:58.920537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.582 [2024-07-24 19:06:58.920563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.582 qpair failed and we were unable to recover it.
00:24:21.582 [2024-07-24 19:06:58.920709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.582 [2024-07-24 19:06:58.920733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.582 qpair failed and we were unable to recover it.
00:24:21.582 [2024-07-24 19:06:58.920886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.582 [2024-07-24 19:06:58.920911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.582 qpair failed and we were unable to recover it.
00:24:21.582 [2024-07-24 19:06:58.921069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.582 [2024-07-24 19:06:58.921094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.582 qpair failed and we were unable to recover it.
00:24:21.582 [2024-07-24 19:06:58.921256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.582 [2024-07-24 19:06:58.921281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.582 qpair failed and we were unable to recover it.
00:24:21.582 [2024-07-24 19:06:58.921407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.582 [2024-07-24 19:06:58.921432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.582 qpair failed and we were unable to recover it.
00:24:21.582 [2024-07-24 19:06:58.921585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.582 [2024-07-24 19:06:58.921610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.582 qpair failed and we were unable to recover it.
00:24:21.582 [2024-07-24 19:06:58.921733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.582 [2024-07-24 19:06:58.921758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.582 qpair failed and we were unable to recover it.
00:24:21.582 [2024-07-24 19:06:58.921913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.582 [2024-07-24 19:06:58.921938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.922089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.922121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.922300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.922324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.922476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.922501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.922684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.922708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.922864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.922888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.923014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.923041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.923195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.923221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.923374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.923403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.923556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.923581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.923708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.923733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.923915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.923940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.924094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.924125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.924280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.924307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.924433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.924459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.924609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.924633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.924778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.924803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.924949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.924974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.925125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.925150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.925303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.925327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.925450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.925475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.925602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.925627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.925805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.925830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.925957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.925982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.926150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.926190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.926332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.926360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.926487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.926513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.926642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.926669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.926830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.926855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.927001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.927027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.927181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.927207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.927332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.927357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.927484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.927509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.927662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.927687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.927846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.927871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.928024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.928053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.928185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.928213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.928347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.928372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.583 [2024-07-24 19:06:58.928490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.583 [2024-07-24 19:06:58.928516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.583 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.928642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.928668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.928835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.928860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.929035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.929059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.929209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.929235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.929366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.929392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.929550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.929575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.929702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.929727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.929880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.929905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.930044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.930068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.930197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.930222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.930357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.930382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.930533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.930557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.930725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.930750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.930901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.930926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.931075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.931100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.931244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.931269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.931426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.931451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.931600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.931625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.931777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.931802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.931945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.931970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.932099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.932131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.932257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.932282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.932434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.932459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.932589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.932617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.932792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.932816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.932969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.932996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.933180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.933206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.933335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.933360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.933512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.933537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.933687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.933712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.933892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.933917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.934066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.934091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.934273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.934298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.934469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.934494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.934615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.934640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.934777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.934802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.934924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.934949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.584 qpair failed and we were unable to recover it.
00:24:21.584 [2024-07-24 19:06:58.935127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.584 [2024-07-24 19:06:58.935152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.935308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.935332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.935486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.935510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.935634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.935659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.935818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.935842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.935995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.936020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.936187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.936212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.936337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.936362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.936535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.936559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.936708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.936733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.936882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.936906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.937054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.937078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.937212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.937237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.937372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.937395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.937557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.937580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.937754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.937779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.937932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.937957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.938076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.938107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.938233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.938257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.938378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.938402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.938580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.938605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.938737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.938761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.938893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.938916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.939046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.939071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.939231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.939257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.939385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.939411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.939567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.939593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.939749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.939774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.939945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.939969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.940094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.940127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.940303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.940328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.940454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.940478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.940626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.940651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.940800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.940825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.940978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.941001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.941139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.585 [2024-07-24 19:06:58.941164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.585 qpair failed and we were unable to recover it.
00:24:21.585 [2024-07-24 19:06:58.941315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.586 [2024-07-24 19:06:58.941340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.586 qpair failed and we were unable to recover it.
00:24:21.586 [2024-07-24 19:06:58.941518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.586 [2024-07-24 19:06:58.941542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.586 qpair failed and we were unable to recover it.
00:24:21.586 [2024-07-24 19:06:58.941686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.586 [2024-07-24 19:06:58.941711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.586 qpair failed and we were unable to recover it.
00:24:21.586 [2024-07-24 19:06:58.941858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.586 [2024-07-24 19:06:58.941883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.586 qpair failed and we were unable to recover it.
00:24:21.586 [2024-07-24 19:06:58.942025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.586 [2024-07-24 19:06:58.942049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.586 qpair failed and we were unable to recover it.
00:24:21.586 [2024-07-24 19:06:58.942184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.586 [2024-07-24 19:06:58.942209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.586 qpair failed and we were unable to recover it.
00:24:21.586 [2024-07-24 19:06:58.942361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.586 [2024-07-24 19:06:58.942386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.586 qpair failed and we were unable to recover it.
00:24:21.586 [2024-07-24 19:06:58.942555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.586 [2024-07-24 19:06:58.942579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.586 qpair failed and we were unable to recover it.
00:24:21.586 [2024-07-24 19:06:58.942700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.586 [2024-07-24 19:06:58.942725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.586 qpair failed and we were unable to recover it.
00:24:21.586 [2024-07-24 19:06:58.942872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.586 [2024-07-24 19:06:58.942896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.586 qpair failed and we were unable to recover it.
00:24:21.586 [2024-07-24 19:06:58.943024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.586 [2024-07-24 19:06:58.943048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.586 qpair failed and we were unable to recover it.
00:24:21.586 [2024-07-24 19:06:58.943207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.586 [2024-07-24 19:06:58.943232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.586 qpair failed and we were unable to recover it.
00:24:21.586 [2024-07-24 19:06:58.943389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.586 [2024-07-24 19:06:58.943414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.586 qpair failed and we were unable to recover it.
00:24:21.586 [2024-07-24 19:06:58.943561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.586 [2024-07-24 19:06:58.943585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.586 qpair failed and we were unable to recover it.
00:24:21.586 [2024-07-24 19:06:58.943729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.586 [2024-07-24 19:06:58.943753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.586 qpair failed and we were unable to recover it.
00:24:21.586 [2024-07-24 19:06:58.943907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.586 [2024-07-24 19:06:58.943932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.586 qpair failed and we were unable to recover it.
00:24:21.586 [2024-07-24 19:06:58.944079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.586 [2024-07-24 19:06:58.944110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.586 qpair failed and we were unable to recover it.
00:24:21.586 [2024-07-24 19:06:58.944293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.586 [2024-07-24 19:06:58.944318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.586 qpair failed and we were unable to recover it.
00:24:21.586 [2024-07-24 19:06:58.944461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.586 [2024-07-24 19:06:58.944489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.586 qpair failed and we were unable to recover it.
00:24:21.586 [2024-07-24 19:06:58.944633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.586 [2024-07-24 19:06:58.944657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.586 qpair failed and we were unable to recover it.
00:24:21.586 [2024-07-24 19:06:58.948255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.586 [2024-07-24 19:06:58.948294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.586 qpair failed and we were unable to recover it.
00:24:21.586 [2024-07-24 19:06:58.948436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.586 [2024-07-24 19:06:58.948463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.586 qpair failed and we were unable to recover it.
00:24:21.586 [2024-07-24 19:06:58.948645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.586 [2024-07-24 19:06:58.948671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.586 qpair failed and we were unable to recover it.
00:24:21.586 [2024-07-24 19:06:58.948852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.586 [2024-07-24 19:06:58.948876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.586 qpair failed and we were unable to recover it.
00:24:21.586 [2024-07-24 19:06:58.949053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.586 [2024-07-24 19:06:58.949078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.586 qpair failed and we were unable to recover it.
00:24:21.586 [2024-07-24 19:06:58.949207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.586 [2024-07-24 19:06:58.949232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.586 qpair failed and we were unable to recover it.
00:24:21.586 [2024-07-24 19:06:58.949384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.586 [2024-07-24 19:06:58.949408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.586 qpair failed and we were unable to recover it.
00:24:21.586 [2024-07-24 19:06:58.949557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.586 [2024-07-24 19:06:58.949580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.586 qpair failed and we were unable to recover it.
00:24:21.586 [2024-07-24 19:06:58.949747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.586 [2024-07-24 19:06:58.949771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.586 qpair failed and we were unable to recover it. 00:24:21.586 [2024-07-24 19:06:58.949894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.586 [2024-07-24 19:06:58.949918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.586 qpair failed and we were unable to recover it. 00:24:21.586 [2024-07-24 19:06:58.950067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.586 [2024-07-24 19:06:58.950091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.586 qpair failed and we were unable to recover it. 00:24:21.586 [2024-07-24 19:06:58.950256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.586 [2024-07-24 19:06:58.950280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.586 qpair failed and we were unable to recover it. 00:24:21.586 [2024-07-24 19:06:58.950421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.586 [2024-07-24 19:06:58.950446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.586 qpair failed and we were unable to recover it. 00:24:21.586 [2024-07-24 19:06:58.950622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.586 [2024-07-24 19:06:58.950646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 00:24:21.587 [2024-07-24 19:06:58.950795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.950820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 00:24:21.587 [2024-07-24 19:06:58.950973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.950998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 00:24:21.587 [2024-07-24 19:06:58.951154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.951179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 00:24:21.587 [2024-07-24 19:06:58.951357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.951382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 
00:24:21.587 [2024-07-24 19:06:58.951532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.951557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 00:24:21.587 [2024-07-24 19:06:58.951709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.951733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 00:24:21.587 [2024-07-24 19:06:58.951910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.951934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 00:24:21.587 [2024-07-24 19:06:58.952090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.952121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 00:24:21.587 [2024-07-24 19:06:58.952267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.952292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 00:24:21.587 [2024-07-24 19:06:58.952443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.952469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 00:24:21.587 [2024-07-24 19:06:58.952646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.952670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 00:24:21.587 [2024-07-24 19:06:58.952825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.952855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 00:24:21.587 [2024-07-24 19:06:58.952988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.953014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 00:24:21.587 [2024-07-24 19:06:58.953151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.953176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 
00:24:21.587 [2024-07-24 19:06:58.953304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.953330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 00:24:21.587 [2024-07-24 19:06:58.953479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.953503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 00:24:21.587 [2024-07-24 19:06:58.953650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.953675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 00:24:21.587 [2024-07-24 19:06:58.953826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.953851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 00:24:21.587 [2024-07-24 19:06:58.954028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.954053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 00:24:21.587 [2024-07-24 19:06:58.954205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.954229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 00:24:21.587 [2024-07-24 19:06:58.954384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.954409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 00:24:21.587 [2024-07-24 19:06:58.954560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.954585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 00:24:21.587 [2024-07-24 19:06:58.954713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.954737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 00:24:21.587 [2024-07-24 19:06:58.954862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.954887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 
00:24:21.587 [2024-07-24 19:06:58.955041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.955065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 00:24:21.587 [2024-07-24 19:06:58.955233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.955258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 00:24:21.587 [2024-07-24 19:06:58.955387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.955412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 00:24:21.587 [2024-07-24 19:06:58.955591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.955616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 00:24:21.587 [2024-07-24 19:06:58.955766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.955790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 00:24:21.587 [2024-07-24 19:06:58.955949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.955973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 00:24:21.587 [2024-07-24 19:06:58.956124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.956150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 00:24:21.587 [2024-07-24 19:06:58.956324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.956349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 00:24:21.587 [2024-07-24 19:06:58.956505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.956528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 00:24:21.587 [2024-07-24 19:06:58.956690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.956715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 
00:24:21.587 [2024-07-24 19:06:58.956848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.956873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 00:24:21.587 [2024-07-24 19:06:58.956990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.957014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 00:24:21.587 [2024-07-24 19:06:58.957163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.587 [2024-07-24 19:06:58.957189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.587 qpair failed and we were unable to recover it. 00:24:21.587 [2024-07-24 19:06:58.957314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.957340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 00:24:21.588 [2024-07-24 19:06:58.957468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.957493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 00:24:21.588 [2024-07-24 19:06:58.957640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.957664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 00:24:21.588 [2024-07-24 19:06:58.957791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.957817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 00:24:21.588 [2024-07-24 19:06:58.957966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.957990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 00:24:21.588 [2024-07-24 19:06:58.958165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.958191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 00:24:21.588 [2024-07-24 19:06:58.958342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.958366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 
00:24:21.588 [2024-07-24 19:06:58.958514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.958539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 00:24:21.588 [2024-07-24 19:06:58.958691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.958715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 00:24:21.588 [2024-07-24 19:06:58.958866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.958891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 00:24:21.588 [2024-07-24 19:06:58.959061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.959086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 00:24:21.588 [2024-07-24 19:06:58.959246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.959271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 00:24:21.588 [2024-07-24 19:06:58.959422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.959446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 00:24:21.588 [2024-07-24 19:06:58.959611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.959635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 00:24:21.588 [2024-07-24 19:06:58.959764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.959789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 00:24:21.588 [2024-07-24 19:06:58.959946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.959972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 00:24:21.588 [2024-07-24 19:06:58.960133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.960160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 
00:24:21.588 [2024-07-24 19:06:58.960312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.960336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 00:24:21.588 [2024-07-24 19:06:58.960467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.960492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 00:24:21.588 [2024-07-24 19:06:58.960639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.960665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 00:24:21.588 [2024-07-24 19:06:58.960812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.960836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 00:24:21.588 [2024-07-24 19:06:58.961005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.961030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 00:24:21.588 [2024-07-24 19:06:58.961178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.961204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 00:24:21.588 [2024-07-24 19:06:58.961323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.961347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 00:24:21.588 [2024-07-24 19:06:58.961473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.961498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 00:24:21.588 [2024-07-24 19:06:58.961621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.961646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 00:24:21.588 [2024-07-24 19:06:58.961795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.961819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 
00:24:21.588 [2024-07-24 19:06:58.961998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.962023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 00:24:21.588 [2024-07-24 19:06:58.962152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.962177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 00:24:21.588 [2024-07-24 19:06:58.962336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.962361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 00:24:21.588 [2024-07-24 19:06:58.962486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.962511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 00:24:21.588 [2024-07-24 19:06:58.962639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.962664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 00:24:21.588 [2024-07-24 19:06:58.962821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.962846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 00:24:21.588 [2024-07-24 19:06:58.962961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.962986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 00:24:21.588 [2024-07-24 19:06:58.963128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.963152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 00:24:21.588 [2024-07-24 19:06:58.963299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.963324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 00:24:21.588 [2024-07-24 19:06:58.963478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.963503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 
00:24:21.588 [2024-07-24 19:06:58.963636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.588 [2024-07-24 19:06:58.963661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.588 qpair failed and we were unable to recover it. 00:24:21.588 [2024-07-24 19:06:58.963837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.963862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.963984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.964009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.964136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.964162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.964314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.964338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.964465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.964493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.964646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.964671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.964823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.964847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.964996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.965020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.965148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.965174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 
00:24:21.589 [2024-07-24 19:06:58.965350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.965375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.965529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.965554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.965710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.965735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.965860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.965885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.966033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.966057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.966186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.966211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.966337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.966361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.966482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.966505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.966663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.966687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.966847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.966872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 
00:24:21.589 [2024-07-24 19:06:58.967023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.967048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.967175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.967199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.967368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.967392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.967520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.967545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.967672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.967697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.967863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.967888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.968012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.968037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.968181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.968206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.968324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.968348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.968507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.968531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 
00:24:21.589 [2024-07-24 19:06:58.968681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.968705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.968859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.968884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.969017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.969045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.969170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.969194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.969327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.969352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.969514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.969538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.969692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.969716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.969871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.969896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.970051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.970076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.589 qpair failed and we were unable to recover it. 00:24:21.589 [2024-07-24 19:06:58.970223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.589 [2024-07-24 19:06:58.970248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.590 qpair failed and we were unable to recover it. 
00:24:21.590 [2024-07-24 19:06:58.970416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.590 [2024-07-24 19:06:58.970440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.590 qpair failed and we were unable to recover it. 00:24:21.590 [2024-07-24 19:06:58.970593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.590 [2024-07-24 19:06:58.970618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.590 qpair failed and we were unable to recover it. 00:24:21.590 [2024-07-24 19:06:58.970770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.590 [2024-07-24 19:06:58.970794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.590 qpair failed and we were unable to recover it. 00:24:21.590 [2024-07-24 19:06:58.970958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.590 [2024-07-24 19:06:58.970982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.590 qpair failed and we were unable to recover it. 00:24:21.590 [2024-07-24 19:06:58.971141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.590 [2024-07-24 19:06:58.971166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.590 qpair failed and we were unable to recover it. 00:24:21.590 [2024-07-24 19:06:58.971343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.590 [2024-07-24 19:06:58.971367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.590 qpair failed and we were unable to recover it. 00:24:21.590 [2024-07-24 19:06:58.971498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.590 [2024-07-24 19:06:58.971523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.590 qpair failed and we were unable to recover it. 00:24:21.590 [2024-07-24 19:06:58.971676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.590 [2024-07-24 19:06:58.971700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.590 qpair failed and we were unable to recover it. 00:24:21.590 [2024-07-24 19:06:58.971848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.590 [2024-07-24 19:06:58.971873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.590 qpair failed and we were unable to recover it. 00:24:21.590 [2024-07-24 19:06:58.972048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.590 [2024-07-24 19:06:58.972073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.590 qpair failed and we were unable to recover it. 
00:24:21.590 [2024-07-24 19:06:58.972252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.590 [2024-07-24 19:06:58.972277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.590 qpair failed and we were unable to recover it. 00:24:21.590 [2024-07-24 19:06:58.972431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.590 [2024-07-24 19:06:58.972456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.590 qpair failed and we were unable to recover it. 00:24:21.590 [2024-07-24 19:06:58.972601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.590 [2024-07-24 19:06:58.972625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.590 qpair failed and we were unable to recover it. 00:24:21.590 [2024-07-24 19:06:58.972756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.590 [2024-07-24 19:06:58.972780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.590 qpair failed and we were unable to recover it. 00:24:21.590 [2024-07-24 19:06:58.972910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.590 [2024-07-24 19:06:58.972934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.590 qpair failed and we were unable to recover it. 00:24:21.590 [2024-07-24 19:06:58.973088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.590 [2024-07-24 19:06:58.973119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.590 qpair failed and we were unable to recover it. 00:24:21.590 [2024-07-24 19:06:58.973270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.590 [2024-07-24 19:06:58.973294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.590 qpair failed and we were unable to recover it. 00:24:21.590 [2024-07-24 19:06:58.973444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.590 [2024-07-24 19:06:58.973469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.590 qpair failed and we were unable to recover it. 00:24:21.590 [2024-07-24 19:06:58.973620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.590 [2024-07-24 19:06:58.973645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.590 qpair failed and we were unable to recover it. 00:24:21.590 [2024-07-24 19:06:58.973772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.590 [2024-07-24 19:06:58.973801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.590 qpair failed and we were unable to recover it. 
00:24:21.590 [2024-07-24 19:06:58.973948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.590 [2024-07-24 19:06:58.973972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.590 qpair failed and we were unable to recover it. 00:24:21.590 [2024-07-24 19:06:58.974125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.590 [2024-07-24 19:06:58.974150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.590 qpair failed and we were unable to recover it. 00:24:21.590 [2024-07-24 19:06:58.974271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.590 [2024-07-24 19:06:58.974294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.590 qpair failed and we were unable to recover it. 00:24:21.590 [2024-07-24 19:06:58.974426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.590 [2024-07-24 19:06:58.974450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.590 qpair failed and we were unable to recover it. 00:24:21.590 [2024-07-24 19:06:58.974628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.590 [2024-07-24 19:06:58.974652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.590 qpair failed and we were unable to recover it. 00:24:21.590 [2024-07-24 19:06:58.974804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.590 [2024-07-24 19:06:58.974828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.590 qpair failed and we were unable to recover it. 00:24:21.590 [2024-07-24 19:06:58.974979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.590 [2024-07-24 19:06:58.975004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.590 qpair failed and we were unable to recover it. 00:24:21.590 [2024-07-24 19:06:58.975134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.590 [2024-07-24 19:06:58.975160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.590 qpair failed and we were unable to recover it. 00:24:21.590 [2024-07-24 19:06:58.975307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.590 [2024-07-24 19:06:58.975331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.590 qpair failed and we were unable to recover it. 00:24:21.590 [2024-07-24 19:06:58.975451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.590 [2024-07-24 19:06:58.975475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.590 qpair failed and we were unable to recover it. 
00:24:21.590 [2024-07-24 19:06:58.975596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.590 [2024-07-24 19:06:58.975623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.590 qpair failed and we were unable to recover it.
[... the same error triple -- posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. -- repeats for every reconnect attempt from 19:06:58.975 through 19:06:59.013 (log time 00:24:21.590 to 00:24:21.596) ...]
00:24:21.596 [2024-07-24 19:06:59.013123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.596 [2024-07-24 19:06:59.013149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.596 qpair failed and we were unable to recover it.
00:24:21.596 [2024-07-24 19:06:59.013276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.596 [2024-07-24 19:06:59.013300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.596 qpair failed and we were unable to recover it. 00:24:21.596 [2024-07-24 19:06:59.013496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.596 [2024-07-24 19:06:59.013522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.596 qpair failed and we were unable to recover it. 00:24:21.596 [2024-07-24 19:06:59.013705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.596 [2024-07-24 19:06:59.013731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.596 qpair failed and we were unable to recover it. 00:24:21.596 [2024-07-24 19:06:59.013907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.596 [2024-07-24 19:06:59.013934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.596 qpair failed and we were unable to recover it. 00:24:21.596 [2024-07-24 19:06:59.014127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.596 [2024-07-24 19:06:59.014156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.596 qpair failed and we were unable to recover it. 00:24:21.596 [2024-07-24 19:06:59.014333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.596 [2024-07-24 19:06:59.014357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.596 qpair failed and we were unable to recover it. 00:24:21.596 [2024-07-24 19:06:59.014508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.596 [2024-07-24 19:06:59.014549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.596 qpair failed and we were unable to recover it. 00:24:21.596 [2024-07-24 19:06:59.014741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.596 [2024-07-24 19:06:59.014769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.596 qpair failed and we were unable to recover it. 00:24:21.596 [2024-07-24 19:06:59.014955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.596 [2024-07-24 19:06:59.014980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.596 qpair failed and we were unable to recover it. 00:24:21.596 [2024-07-24 19:06:59.015127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.596 [2024-07-24 19:06:59.015152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.596 qpair failed and we were unable to recover it. 
00:24:21.596 [2024-07-24 19:06:59.015308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.596 [2024-07-24 19:06:59.015332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.596 qpair failed and we were unable to recover it. 00:24:21.596 [2024-07-24 19:06:59.015484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.596 [2024-07-24 19:06:59.015509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.596 qpair failed and we were unable to recover it. 00:24:21.596 [2024-07-24 19:06:59.015703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.596 [2024-07-24 19:06:59.015730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.596 qpair failed and we were unable to recover it. 00:24:21.596 [2024-07-24 19:06:59.015903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.596 [2024-07-24 19:06:59.015928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.596 qpair failed and we were unable to recover it. 00:24:21.596 [2024-07-24 19:06:59.016081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.016111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 00:24:21.597 [2024-07-24 19:06:59.016287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.016315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 00:24:21.597 [2024-07-24 19:06:59.016479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.016507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 00:24:21.597 [2024-07-24 19:06:59.016685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.016709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 00:24:21.597 [2024-07-24 19:06:59.016864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.016905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 00:24:21.597 [2024-07-24 19:06:59.017078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.017111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 
00:24:21.597 [2024-07-24 19:06:59.017312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.017336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 00:24:21.597 [2024-07-24 19:06:59.017525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.017549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 00:24:21.597 [2024-07-24 19:06:59.017725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.017749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 00:24:21.597 [2024-07-24 19:06:59.017901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.017926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 00:24:21.597 [2024-07-24 19:06:59.018077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.018128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 00:24:21.597 [2024-07-24 19:06:59.018302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.018327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 00:24:21.597 [2024-07-24 19:06:59.018481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.018507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 00:24:21.597 [2024-07-24 19:06:59.018701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.018729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 00:24:21.597 [2024-07-24 19:06:59.018921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.018946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 00:24:21.597 [2024-07-24 19:06:59.019067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.019091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 
00:24:21.597 [2024-07-24 19:06:59.019259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.019284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 00:24:21.597 [2024-07-24 19:06:59.019456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.019485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 00:24:21.597 [2024-07-24 19:06:59.019629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.019654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 00:24:21.597 [2024-07-24 19:06:59.019827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.019851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 00:24:21.597 [2024-07-24 19:06:59.020007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.020035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 00:24:21.597 [2024-07-24 19:06:59.020202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.020228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 00:24:21.597 [2024-07-24 19:06:59.020374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.020414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 00:24:21.597 [2024-07-24 19:06:59.020588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.020612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 00:24:21.597 [2024-07-24 19:06:59.020762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.020787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 00:24:21.597 [2024-07-24 19:06:59.020945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.020972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 
00:24:21.597 [2024-07-24 19:06:59.021136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.021163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 00:24:21.597 [2024-07-24 19:06:59.021303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.021327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 00:24:21.597 [2024-07-24 19:06:59.021466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.021491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 00:24:21.597 [2024-07-24 19:06:59.021615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.021640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 00:24:21.597 [2024-07-24 19:06:59.021793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.021818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 00:24:21.597 [2024-07-24 19:06:59.021942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.021966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 00:24:21.597 [2024-07-24 19:06:59.022096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.022127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 00:24:21.597 [2024-07-24 19:06:59.022318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.022343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 00:24:21.597 [2024-07-24 19:06:59.022517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.022543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 00:24:21.597 [2024-07-24 19:06:59.022727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.022751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 
00:24:21.597 [2024-07-24 19:06:59.022903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.022929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 00:24:21.597 [2024-07-24 19:06:59.023137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.597 [2024-07-24 19:06:59.023166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.597 qpair failed and we were unable to recover it. 00:24:21.597 [2024-07-24 19:06:59.023359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.023384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 00:24:21.598 [2024-07-24 19:06:59.023514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.023538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 00:24:21.598 [2024-07-24 19:06:59.023706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.023734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 00:24:21.598 [2024-07-24 19:06:59.023932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.023959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 00:24:21.598 [2024-07-24 19:06:59.024141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.024167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 00:24:21.598 [2024-07-24 19:06:59.024321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.024346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 00:24:21.598 [2024-07-24 19:06:59.024478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.024503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 00:24:21.598 [2024-07-24 19:06:59.024630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.024655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 
00:24:21.598 [2024-07-24 19:06:59.024806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.024831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 00:24:21.598 [2024-07-24 19:06:59.024981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.025021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 00:24:21.598 [2024-07-24 19:06:59.025194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.025220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 00:24:21.598 [2024-07-24 19:06:59.025372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.025414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 00:24:21.598 [2024-07-24 19:06:59.025582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.025613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 00:24:21.598 [2024-07-24 19:06:59.025780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.025805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 00:24:21.598 [2024-07-24 19:06:59.025959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.025984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 00:24:21.598 [2024-07-24 19:06:59.026113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.026138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 00:24:21.598 [2024-07-24 19:06:59.026286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.026310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 00:24:21.598 [2024-07-24 19:06:59.026484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.026509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 
00:24:21.598 [2024-07-24 19:06:59.026658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.026682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 00:24:21.598 [2024-07-24 19:06:59.026834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.026858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 00:24:21.598 [2024-07-24 19:06:59.027014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.027039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 00:24:21.598 [2024-07-24 19:06:59.027243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.027268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 00:24:21.598 [2024-07-24 19:06:59.027387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.027411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 00:24:21.598 [2024-07-24 19:06:59.027563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.027587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 00:24:21.598 [2024-07-24 19:06:59.027795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.027822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 00:24:21.598 [2024-07-24 19:06:59.028018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.028042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 00:24:21.598 [2024-07-24 19:06:59.028179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.028204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 00:24:21.598 [2024-07-24 19:06:59.028329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.028353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 
00:24:21.598 [2024-07-24 19:06:59.028540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.028565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 00:24:21.598 [2024-07-24 19:06:59.028692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.028716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 00:24:21.598 [2024-07-24 19:06:59.028846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.028870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 00:24:21.598 [2024-07-24 19:06:59.029021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.029046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 00:24:21.598 [2024-07-24 19:06:59.029242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.029270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 00:24:21.598 [2024-07-24 19:06:59.029468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.029492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 00:24:21.598 [2024-07-24 19:06:59.029664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.029688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 00:24:21.598 [2024-07-24 19:06:59.029844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.029869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 00:24:21.598 [2024-07-24 19:06:59.030044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.030069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 00:24:21.598 [2024-07-24 19:06:59.030208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.598 [2024-07-24 19:06:59.030233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.598 qpair failed and we were unable to recover it. 
00:24:21.598 [2024-07-24 19:06:59.030389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.030414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 00:24:21.599 [2024-07-24 19:06:59.030572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.030601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 00:24:21.599 [2024-07-24 19:06:59.030777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.030801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 00:24:21.599 [2024-07-24 19:06:59.030949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.030975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 00:24:21.599 [2024-07-24 19:06:59.031148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.031174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 00:24:21.599 [2024-07-24 19:06:59.031351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.031375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 00:24:21.599 [2024-07-24 19:06:59.031542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.031568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 00:24:21.599 [2024-07-24 19:06:59.031751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.031775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 00:24:21.599 [2024-07-24 19:06:59.031925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.031950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 00:24:21.599 [2024-07-24 19:06:59.032099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.032149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 
00:24:21.599 [2024-07-24 19:06:59.032351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.032379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 00:24:21.599 [2024-07-24 19:06:59.032563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.032587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 00:24:21.599 [2024-07-24 19:06:59.032736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.032760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 00:24:21.599 [2024-07-24 19:06:59.032904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.032932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 00:24:21.599 [2024-07-24 19:06:59.033130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.033157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 00:24:21.599 [2024-07-24 19:06:59.033296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.033320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 00:24:21.599 [2024-07-24 19:06:59.033448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.033473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 00:24:21.599 [2024-07-24 19:06:59.033668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.033693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 00:24:21.599 [2024-07-24 19:06:59.033813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.033854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 00:24:21.599 [2024-07-24 19:06:59.034030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.034055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 
00:24:21.599 [2024-07-24 19:06:59.034210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.034235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 00:24:21.599 [2024-07-24 19:06:59.034424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.034452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 00:24:21.599 [2024-07-24 19:06:59.034655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.034680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 00:24:21.599 [2024-07-24 19:06:59.034829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.034853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 00:24:21.599 [2024-07-24 19:06:59.035001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.035026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 00:24:21.599 [2024-07-24 19:06:59.035156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.035182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 00:24:21.599 [2024-07-24 19:06:59.035360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.035384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 00:24:21.599 [2024-07-24 19:06:59.035556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.035583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 00:24:21.599 [2024-07-24 19:06:59.035711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.035738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 00:24:21.599 [2024-07-24 19:06:59.035913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.035939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 
00:24:21.599 [2024-07-24 19:06:59.036084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.036114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 00:24:21.599 [2024-07-24 19:06:59.036299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.036327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 00:24:21.599 [2024-07-24 19:06:59.036462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.036488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 00:24:21.599 [2024-07-24 19:06:59.036615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.036639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 00:24:21.599 [2024-07-24 19:06:59.036766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.599 [2024-07-24 19:06:59.036792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.599 qpair failed and we were unable to recover it. 00:24:21.599 [2024-07-24 19:06:59.036970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.600 [2024-07-24 19:06:59.036998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.600 qpair failed and we were unable to recover it. 00:24:21.600 [2024-07-24 19:06:59.037178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.600 [2024-07-24 19:06:59.037203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.600 qpair failed and we were unable to recover it. 00:24:21.600 [2024-07-24 19:06:59.037334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.600 [2024-07-24 19:06:59.037358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.600 qpair failed and we were unable to recover it. 00:24:21.600 [2024-07-24 19:06:59.037489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.600 [2024-07-24 19:06:59.037513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.600 qpair failed and we were unable to recover it. 00:24:21.600 [2024-07-24 19:06:59.037640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.600 [2024-07-24 19:06:59.037664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.600 qpair failed and we were unable to recover it. 
00:24:21.600 [2024-07-24 19:06:59.037820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.600 [2024-07-24 19:06:59.037844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.600 qpair failed and we were unable to recover it.
00:24:21.600 [... the same three-line failure (connect() failed, errno = 111 / sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats, with only the timestamps changing, over 150 more times between 19:06:59.037 and 19:06:59.067 while the host keeps retrying the connection ...]
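For anyone triaging this block: errno = 111 on Linux is ECONNREFUSED, meaning the TCP connection attempt to 10.0.0.2:4420 is being actively refused because nothing is listening on that port once the target app dies. A minimal standalone sketch in plain POSIX C (illustrative, not SPDK code; it assumes the address is reachable but has no listener on port 4420) that reproduces the same failure mode:

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Connect to the same address/port the test uses; with no listener,
     * the kernel answers RST and connect() fails with ECONNREFUSED (111). */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa = { 0 };

    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}

Run against a port with no listener, it prints the same "connect() failed, errno = 111" diagnostic seen throughout this block.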
00:24:21.604 [2024-07-24 19:06:59.067845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.604 [2024-07-24 19:06:59.067870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.604 qpair failed and we were unable to recover it. 00:24:21.604 [2024-07-24 19:06:59.068030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.604 [2024-07-24 19:06:59.068057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.604 qpair failed and we were unable to recover it. 00:24:21.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3252404 Killed "${NVMF_APP[@]}" "$@" 00:24:21.604 [2024-07-24 19:06:59.068275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.604 [2024-07-24 19:06:59.068301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.604 qpair failed and we were unable to recover it. 00:24:21.604 [2024-07-24 19:06:59.068456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.604 [2024-07-24 19:06:59.068481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.604 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:24:21.604 qpair failed and we were unable to recover it. 00:24:21.604 [2024-07-24 19:06:59.068619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.604 [2024-07-24 19:06:59.068644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.604 qpair failed and we were unable to recover it. 00:24:21.604 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:21.604 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:21.604 [2024-07-24 19:06:59.068795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.604 [2024-07-24 19:06:59.068835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.604 qpair failed and we were unable to recover it. 00:24:21.604 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:21.604 [2024-07-24 19:06:59.068987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.604 [2024-07-24 19:06:59.069011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.604 qpair failed and we were unable to recover it. 
00:24:21.604 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:21.604 [2024-07-24 19:06:59.069162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.604 [2024-07-24 19:06:59.069188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.604 qpair failed and we were unable to recover it. 00:24:21.604 [2024-07-24 19:06:59.069340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.604 [2024-07-24 19:06:59.069381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.604 qpair failed and we were unable to recover it. 00:24:21.604 [2024-07-24 19:06:59.069532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.604 [2024-07-24 19:06:59.069556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.604 qpair failed and we were unable to recover it. 00:24:21.604 [2024-07-24 19:06:59.069702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.604 [2024-07-24 19:06:59.069726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.604 qpair failed and we were unable to recover it. 00:24:21.604 [2024-07-24 19:06:59.069877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.604 [2024-07-24 19:06:59.069919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.604 qpair failed and we were unable to recover it. 00:24:21.604 [2024-07-24 19:06:59.070106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.604 [2024-07-24 19:06:59.070131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.604 qpair failed and we were unable to recover it. 00:24:21.604 [2024-07-24 19:06:59.070291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.604 [2024-07-24 19:06:59.070315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.604 qpair failed and we were unable to recover it. 00:24:21.604 [2024-07-24 19:06:59.070494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.604 [2024-07-24 19:06:59.070519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.604 qpair failed and we were unable to recover it. 00:24:21.604 [2024-07-24 19:06:59.070642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.604 [2024-07-24 19:06:59.070667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.604 qpair failed and we were unable to recover it. 00:24:21.604 [2024-07-24 19:06:59.070796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.604 [2024-07-24 19:06:59.070837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.604 qpair failed and we were unable to recover it. 
00:24:21.604 [2024-07-24 19:06:59.071019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.604 [2024-07-24 19:06:59.071044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.604 qpair failed and we were unable to recover it. 00:24:21.604 [2024-07-24 19:06:59.071215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.604 [2024-07-24 19:06:59.071241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.604 qpair failed and we were unable to recover it. 00:24:21.604 [2024-07-24 19:06:59.071410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.604 [2024-07-24 19:06:59.071437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.604 qpair failed and we were unable to recover it. 00:24:21.604 [2024-07-24 19:06:59.071602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.604 [2024-07-24 19:06:59.071628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.604 qpair failed and we were unable to recover it. 00:24:21.604 [2024-07-24 19:06:59.071780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.604 [2024-07-24 19:06:59.071805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.604 qpair failed and we were unable to recover it. 00:24:21.604 [2024-07-24 19:06:59.071965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.604 [2024-07-24 19:06:59.072009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.604 qpair failed and we were unable to recover it. 00:24:21.604 [2024-07-24 19:06:59.072174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.604 [2024-07-24 19:06:59.072202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.604 qpair failed and we were unable to recover it. 00:24:21.604 [2024-07-24 19:06:59.072376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.604 [2024-07-24 19:06:59.072401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.604 qpair failed and we were unable to recover it. 00:24:21.604 [2024-07-24 19:06:59.072528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.604 [2024-07-24 19:06:59.072570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.604 qpair failed and we were unable to recover it. 00:24:21.604 [2024-07-24 19:06:59.072716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.604 [2024-07-24 19:06:59.072744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.604 qpair failed and we were unable to recover it. 
00:24:21.605 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3252966
00:24:21.605 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:24:21.605 [2024-07-24 19:06:59.072920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3252966
00:24:21.605 [2024-07-24 19:06:59.072944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.073124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.073152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3252966 ']'
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:21.605 [2024-07-24 19:06:59.073330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.073358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:21.605 [2024-07-24 19:06:59.073517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.073542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:21.605 [2024-07-24 19:06:59.073712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:21.605 [2024-07-24 19:06:59.073740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
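[Editor's note] Interleaved with the connect() retries, the shell trace above shows nvmf/common.sh recording nvmfpid=3252966 for a freshly launched nvmf_tgt inside the cvl_0_0_ns_spdk network namespace, then calling waitforlisten, whose traced locals are rpc_addr=/var/tmp/spdk.sock and max_retries=100. A simplified sketch of that wait pattern, reusing the pid and socket path from the log; this is an illustration only, not SPDK's actual waitforlisten implementation:

    # Sketch only: poll until the target's RPC UNIX socket exists,
    # bailing out if the process dies or the retry budget is spent.
    pid=3252966                    # nvmfpid from the trace above
    rpc_addr=/var/tmp/spdk.sock    # rpc_addr from the trace above
    max_retries=100                # max_retries from the trace above
    until [ -S "$rpc_addr" ]; do
        kill -0 "$pid" 2>/dev/null || { echo "target exited early" >&2; exit 1; }
        (( --max_retries > 0 )) || { echo "timed out waiting for $rpc_addr" >&2; exit 1; }
        sleep 0.1
    done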
00:24:21.605 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:21.605 [2024-07-24 19:06:59.073909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.073936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.074188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.074212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.074371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.074414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.074592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.074617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.074755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.074780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.074933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.074958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.075114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.075163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.075343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.075368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.075553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.075580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.075751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.075777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.075972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.075996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.076190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.076218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.076413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.076441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.076592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.076616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.076804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.076831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.077027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.077053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.077231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.077256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.077382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.077423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.077577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.077602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.077753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.077778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.077959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.077987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.078124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.078151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.078324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.078349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.078505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.078533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.078722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.078750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.078900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.078926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.079078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.079109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.079254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.079279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.079458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.079482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.079609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.079633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.079790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.079831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.080022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.080049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.080235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.080260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.080408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.080435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.605 [2024-07-24 19:06:59.080624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.605 [2024-07-24 19:06:59.080649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.605 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.080802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.080842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.080982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.081009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.081177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.081202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.081326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.081366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.081525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.081553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.081698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.081722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.081865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.081889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.082016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.082041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.082168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.082193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.082325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.082351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.082468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.082493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.082643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.082669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.082840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.082870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.083037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.083064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.083248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.083273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.083393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.083436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.083604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.083632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.083786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.083810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.083935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.083959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.084133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.084176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.084310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.084334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.084464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.084488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.084619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.084643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.084778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.084803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.084951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.084977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.085167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.085200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.085366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.085390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.085548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.085573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.085705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.085730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.085851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.085875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.086071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.086099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.086266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.086291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.086441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.086465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.086588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.086631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.086786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.086814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.087002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.087027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.087179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.087204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.087354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.087379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.087542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.087567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.087736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.087762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.087920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.087945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.088158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.088184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.088336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.088361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.088541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.606 [2024-07-24 19:06:59.088566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.606 qpair failed and we were unable to recover it.
00:24:21.606 [2024-07-24 19:06:59.088712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.088736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.088885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.088910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.089065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.089114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.089268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.089292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.089465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.089489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.089655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.089681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.089871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.089896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.090065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.090092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.090282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.090313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.090439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.090463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.090635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.090661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.090784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.090810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.090983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.091008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.091140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.091166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.091322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.091348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.091478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.091503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.091627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.091669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.091829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.091854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.092032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.092057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.092257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.092282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.092439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.092465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.092652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.092677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.092809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.092849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.093027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.093053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.093205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.093231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.093384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.093424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.093583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.093607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.093781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.093807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.093931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.093955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.094076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.094100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.094262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.094287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.094435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.094459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.094637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.094661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.094788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.094813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.094968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.094993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.095113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.095138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.095284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.095308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.095482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.095506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.095633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.095658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.095807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.095832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.095985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.096010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.096144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.096171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.096323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.096349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.096504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.096529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.607 qpair failed and we were unable to recover it.
00:24:21.607 [2024-07-24 19:06:59.097165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.607 [2024-07-24 19:06:59.097191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.097367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.097392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.097540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.097566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.097715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.097740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.097874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.097898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.098025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.098051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.098189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.098214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.098344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.098369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.098515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.098540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.098669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.098694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.098816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.098841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.099016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.099041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.099174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.099199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.099331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.099356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.099506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.099531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.099675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.099700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.099878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.099903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.100060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.100085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.100269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.100294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.100476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.100500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.100623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.100649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.100825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.100850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.101007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.101031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.101181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.101207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.101339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.101363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.101513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.101537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.101669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.101694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.101819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.101844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.101997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.102022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.102150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.102175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.102307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.102332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.102477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.102502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.102654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.102682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.102823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.102848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.103026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.103050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.103198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.103222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.103376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.608 [2024-07-24 19:06:59.103402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.608 qpair failed and we were unable to recover it.
00:24:21.608 [2024-07-24 19:06:59.103522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.608 [2024-07-24 19:06:59.103546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.608 qpair failed and we were unable to recover it. 00:24:21.608 [2024-07-24 19:06:59.103695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.608 [2024-07-24 19:06:59.103719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.608 qpair failed and we were unable to recover it. 00:24:21.609 [2024-07-24 19:06:59.103874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.609 [2024-07-24 19:06:59.103900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.609 qpair failed and we were unable to recover it. 00:24:21.609 [2024-07-24 19:06:59.104057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.609 [2024-07-24 19:06:59.104081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.609 qpair failed and we were unable to recover it. 00:24:21.609 [2024-07-24 19:06:59.104267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.609 [2024-07-24 19:06:59.104308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.609 qpair failed and we were unable to recover it. 00:24:21.609 [2024-07-24 19:06:59.104471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.609 [2024-07-24 19:06:59.104498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.609 qpair failed and we were unable to recover it. 00:24:21.609 [2024-07-24 19:06:59.104673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.609 [2024-07-24 19:06:59.104699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.609 qpair failed and we were unable to recover it. 00:24:21.609 [2024-07-24 19:06:59.104854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.609 [2024-07-24 19:06:59.104881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.609 qpair failed and we were unable to recover it. 00:24:21.609 [2024-07-24 19:06:59.105062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.609 [2024-07-24 19:06:59.105088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.609 qpair failed and we were unable to recover it. 00:24:21.609 [2024-07-24 19:06:59.105235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.609 [2024-07-24 19:06:59.105261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.609 qpair failed and we were unable to recover it. 
00:24:21.609 [2024-07-24 19:06:59.105417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.609 [2024-07-24 19:06:59.105443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.609 qpair failed and we were unable to recover it. 00:24:21.609 [2024-07-24 19:06:59.105593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.609 [2024-07-24 19:06:59.105618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.609 qpair failed and we were unable to recover it. 00:24:21.609 [2024-07-24 19:06:59.105737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.609 [2024-07-24 19:06:59.105762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.609 qpair failed and we were unable to recover it. 00:24:21.609 [2024-07-24 19:06:59.105913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.609 [2024-07-24 19:06:59.105937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.609 qpair failed and we were unable to recover it. 00:24:21.609 [2024-07-24 19:06:59.106061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.609 [2024-07-24 19:06:59.106086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.609 qpair failed and we were unable to recover it. 00:24:21.609 [2024-07-24 19:06:59.106250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.609 [2024-07-24 19:06:59.106275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.609 qpair failed and we were unable to recover it. 00:24:21.609 [2024-07-24 19:06:59.106400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.609 [2024-07-24 19:06:59.106425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.609 qpair failed and we were unable to recover it. 00:24:21.609 [2024-07-24 19:06:59.106597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.609 [2024-07-24 19:06:59.106622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.609 qpair failed and we were unable to recover it. 00:24:21.609 [2024-07-24 19:06:59.106772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.609 [2024-07-24 19:06:59.106797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.609 qpair failed and we were unable to recover it. 00:24:21.609 [2024-07-24 19:06:59.106926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.609 [2024-07-24 19:06:59.106950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.609 qpair failed and we were unable to recover it. 
00:24:21.609 [2024-07-24 19:06:59.107096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.609 [2024-07-24 19:06:59.107127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.609 qpair failed and we were unable to recover it. 00:24:21.609 [2024-07-24 19:06:59.107279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.610 [2024-07-24 19:06:59.107304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.610 qpair failed and we were unable to recover it. 00:24:21.610 [2024-07-24 19:06:59.107427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.610 [2024-07-24 19:06:59.107456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.610 qpair failed and we were unable to recover it. 00:24:21.610 [2024-07-24 19:06:59.107574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.610 [2024-07-24 19:06:59.107599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.610 qpair failed and we were unable to recover it. 00:24:21.610 [2024-07-24 19:06:59.107755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.610 [2024-07-24 19:06:59.107779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.610 qpair failed and we were unable to recover it. 00:24:21.610 [2024-07-24 19:06:59.107940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.610 [2024-07-24 19:06:59.107968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.610 qpair failed and we were unable to recover it. 00:24:21.610 [2024-07-24 19:06:59.108147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.610 [2024-07-24 19:06:59.108174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.610 qpair failed and we were unable to recover it. 00:24:21.610 [2024-07-24 19:06:59.108353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.610 [2024-07-24 19:06:59.108379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.610 qpair failed and we were unable to recover it. 00:24:21.610 [2024-07-24 19:06:59.108510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.610 [2024-07-24 19:06:59.108536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.610 qpair failed and we were unable to recover it. 00:24:21.610 [2024-07-24 19:06:59.108685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.610 [2024-07-24 19:06:59.108711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.610 qpair failed and we were unable to recover it. 
00:24:21.610 [2024-07-24 19:06:59.108868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.610 [2024-07-24 19:06:59.108895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.610 qpair failed and we were unable to recover it. 00:24:21.610 [2024-07-24 19:06:59.109075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.610 [2024-07-24 19:06:59.109112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.610 qpair failed and we were unable to recover it. 00:24:21.610 [2024-07-24 19:06:59.109250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.610 [2024-07-24 19:06:59.109277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.610 qpair failed and we were unable to recover it. 00:24:21.610 [2024-07-24 19:06:59.109438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.610 [2024-07-24 19:06:59.109464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.610 qpair failed and we were unable to recover it. 00:24:21.610 [2024-07-24 19:06:59.109608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.610 [2024-07-24 19:06:59.109635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.610 qpair failed and we were unable to recover it. 00:24:21.610 [2024-07-24 19:06:59.109758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.610 [2024-07-24 19:06:59.109783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.610 qpair failed and we were unable to recover it. 00:24:21.610 [2024-07-24 19:06:59.109915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.610 [2024-07-24 19:06:59.109940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.610 qpair failed and we were unable to recover it. 00:24:21.610 [2024-07-24 19:06:59.110071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.610 [2024-07-24 19:06:59.110098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.610 qpair failed and we were unable to recover it. 00:24:21.610 [2024-07-24 19:06:59.110242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.610 [2024-07-24 19:06:59.110268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.610 qpair failed and we were unable to recover it. 00:24:21.610 [2024-07-24 19:06:59.110449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.610 [2024-07-24 19:06:59.110474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.610 qpair failed and we were unable to recover it. 
00:24:21.610 [2024-07-24 19:06:59.110622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.610 [2024-07-24 19:06:59.110647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.610 qpair failed and we were unable to recover it. 00:24:21.610 [2024-07-24 19:06:59.110774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.610 [2024-07-24 19:06:59.110801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.610 qpair failed and we were unable to recover it. 00:24:21.610 [2024-07-24 19:06:59.110932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.610 [2024-07-24 19:06:59.110957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.610 qpair failed and we were unable to recover it. 00:24:21.610 [2024-07-24 19:06:59.111092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.610 [2024-07-24 19:06:59.111125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.610 qpair failed and we were unable to recover it. 00:24:21.610 [2024-07-24 19:06:59.111314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.610 [2024-07-24 19:06:59.111339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.610 qpair failed and we were unable to recover it. 00:24:21.610 [2024-07-24 19:06:59.111468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.610 [2024-07-24 19:06:59.111492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.610 qpair failed and we were unable to recover it. 00:24:21.610 [2024-07-24 19:06:59.111643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.610 [2024-07-24 19:06:59.111667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.610 qpair failed and we were unable to recover it. 00:24:21.610 [2024-07-24 19:06:59.111793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.610 [2024-07-24 19:06:59.111819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.610 qpair failed and we were unable to recover it. 00:24:21.611 [2024-07-24 19:06:59.111994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.611 [2024-07-24 19:06:59.112018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.611 qpair failed and we were unable to recover it. 00:24:21.611 [2024-07-24 19:06:59.112190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.611 [2024-07-24 19:06:59.112217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.611 qpair failed and we were unable to recover it. 
00:24:21.611 [2024-07-24 19:06:59.112335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.611 [2024-07-24 19:06:59.112361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.611 qpair failed and we were unable to recover it. 00:24:21.611 [2024-07-24 19:06:59.112514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.611 [2024-07-24 19:06:59.112540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.611 qpair failed and we were unable to recover it. 00:24:21.611 [2024-07-24 19:06:59.112693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.611 [2024-07-24 19:06:59.112719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.611 qpair failed and we were unable to recover it. 00:24:21.611 [2024-07-24 19:06:59.112869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.611 [2024-07-24 19:06:59.112895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.611 qpair failed and we were unable to recover it. 00:24:21.611 [2024-07-24 19:06:59.113047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.611 [2024-07-24 19:06:59.113072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.611 qpair failed and we were unable to recover it. 00:24:21.611 [2024-07-24 19:06:59.113231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.611 [2024-07-24 19:06:59.113258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.611 qpair failed and we were unable to recover it. 00:24:21.611 [2024-07-24 19:06:59.113439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.611 [2024-07-24 19:06:59.113464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.611 qpair failed and we were unable to recover it. 00:24:21.611 [2024-07-24 19:06:59.113596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.611 [2024-07-24 19:06:59.113621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.611 qpair failed and we were unable to recover it. 00:24:21.611 [2024-07-24 19:06:59.113795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.611 [2024-07-24 19:06:59.113819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.611 qpair failed and we were unable to recover it. 00:24:21.611 [2024-07-24 19:06:59.113962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.611 [2024-07-24 19:06:59.113988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.611 qpair failed and we were unable to recover it. 
00:24:21.611 [2024-07-24 19:06:59.114140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.611 [2024-07-24 19:06:59.114166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.611 qpair failed and we were unable to recover it. 00:24:21.611 [2024-07-24 19:06:59.114319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.611 [2024-07-24 19:06:59.114344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.611 qpair failed and we were unable to recover it. 00:24:21.611 [2024-07-24 19:06:59.114470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.611 [2024-07-24 19:06:59.114495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.611 qpair failed and we were unable to recover it. 00:24:21.611 [2024-07-24 19:06:59.114650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.611 [2024-07-24 19:06:59.114674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.611 qpair failed and we were unable to recover it. 00:24:21.611 [2024-07-24 19:06:59.114803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.611 [2024-07-24 19:06:59.114831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.611 qpair failed and we were unable to recover it. 00:24:21.611 [2024-07-24 19:06:59.114978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.611 [2024-07-24 19:06:59.115004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.611 qpair failed and we were unable to recover it. 00:24:21.611 [2024-07-24 19:06:59.115140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.611 [2024-07-24 19:06:59.115167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.611 qpair failed and we were unable to recover it. 00:24:21.611 [2024-07-24 19:06:59.115320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.611 [2024-07-24 19:06:59.115346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.611 qpair failed and we were unable to recover it. 00:24:21.611 [2024-07-24 19:06:59.115476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.611 [2024-07-24 19:06:59.115501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.611 qpair failed and we were unable to recover it. 00:24:21.611 [2024-07-24 19:06:59.115628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.611 [2024-07-24 19:06:59.115654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.611 qpair failed and we were unable to recover it. 
00:24:21.611 [2024-07-24 19:06:59.115831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.611 [2024-07-24 19:06:59.115857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.611 qpair failed and we were unable to recover it. 00:24:21.611 [2024-07-24 19:06:59.116034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.611 [2024-07-24 19:06:59.116059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.611 qpair failed and we were unable to recover it. 00:24:21.611 [2024-07-24 19:06:59.116197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.611 [2024-07-24 19:06:59.116224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.611 qpair failed and we were unable to recover it. 00:24:21.611 [2024-07-24 19:06:59.116353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.611 [2024-07-24 19:06:59.116378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.611 qpair failed and we were unable to recover it. 00:24:21.611 [2024-07-24 19:06:59.116527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.611 [2024-07-24 19:06:59.116552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.612 qpair failed and we were unable to recover it. 00:24:21.612 [2024-07-24 19:06:59.116682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.612 [2024-07-24 19:06:59.116708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.612 qpair failed and we were unable to recover it. 00:24:21.612 [2024-07-24 19:06:59.116891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.612 [2024-07-24 19:06:59.116917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.612 qpair failed and we were unable to recover it. 00:24:21.612 [2024-07-24 19:06:59.117067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.612 [2024-07-24 19:06:59.117093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.612 qpair failed and we were unable to recover it. 00:24:21.612 [2024-07-24 19:06:59.117235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.612 [2024-07-24 19:06:59.117262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.612 qpair failed and we were unable to recover it. 00:24:21.612 [2024-07-24 19:06:59.117411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.612 [2024-07-24 19:06:59.117437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.612 qpair failed and we were unable to recover it. 
00:24:21.612 [2024-07-24 19:06:59.117593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.612 [2024-07-24 19:06:59.117619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.612 qpair failed and we were unable to recover it. 00:24:21.612 [2024-07-24 19:06:59.117771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.612 [2024-07-24 19:06:59.117797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.612 qpair failed and we were unable to recover it. 00:24:21.612 [2024-07-24 19:06:59.117942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.612 [2024-07-24 19:06:59.117967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.612 qpair failed and we were unable to recover it. 00:24:21.612 [2024-07-24 19:06:59.118144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.612 [2024-07-24 19:06:59.118171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.612 qpair failed and we were unable to recover it. 00:24:21.612 [2024-07-24 19:06:59.118352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.612 [2024-07-24 19:06:59.118377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.612 qpair failed and we were unable to recover it. 00:24:21.612 [2024-07-24 19:06:59.118534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.612 [2024-07-24 19:06:59.118561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.612 qpair failed and we were unable to recover it. 00:24:21.612 [2024-07-24 19:06:59.118709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.612 [2024-07-24 19:06:59.118735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.612 qpair failed and we were unable to recover it. 00:24:21.612 [2024-07-24 19:06:59.118865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.612 [2024-07-24 19:06:59.118889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.612 qpair failed and we were unable to recover it. 00:24:21.612 [2024-07-24 19:06:59.119042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.612 [2024-07-24 19:06:59.119067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.612 qpair failed and we were unable to recover it. 00:24:21.612 [2024-07-24 19:06:59.119249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.612 [2024-07-24 19:06:59.119278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.612 qpair failed and we were unable to recover it. 
00:24:21.612 [2024-07-24 19:06:59.119427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.612 [2024-07-24 19:06:59.119451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.612 qpair failed and we were unable to recover it. 00:24:21.612 [2024-07-24 19:06:59.119577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.612 [2024-07-24 19:06:59.119601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.612 qpair failed and we were unable to recover it. 00:24:21.612 [2024-07-24 19:06:59.119781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.612 [2024-07-24 19:06:59.119806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.612 qpair failed and we were unable to recover it. 00:24:21.612 [2024-07-24 19:06:59.119947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.612 [2024-07-24 19:06:59.119971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.612 qpair failed and we were unable to recover it. 00:24:21.612 [2024-07-24 19:06:59.120139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.612 [2024-07-24 19:06:59.120167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.612 qpair failed and we were unable to recover it. 00:24:21.612 [2024-07-24 19:06:59.120316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.612 [2024-07-24 19:06:59.120342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.612 qpair failed and we were unable to recover it. 00:24:21.612 [2024-07-24 19:06:59.120496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.612 [2024-07-24 19:06:59.120522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.612 qpair failed and we were unable to recover it. 00:24:21.612 [2024-07-24 19:06:59.120650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.612 [2024-07-24 19:06:59.120677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.612 qpair failed and we were unable to recover it. 00:24:21.612 [2024-07-24 19:06:59.120826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.612 [2024-07-24 19:06:59.120852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.612 qpair failed and we were unable to recover it. 00:24:21.612 [2024-07-24 19:06:59.121011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.612 [2024-07-24 19:06:59.121037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.612 qpair failed and we were unable to recover it. 
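Note: errno = 111 on Linux is ECONNREFUSED, i.e. the target at 10.0.0.2:4420 (the NVMe/TCP listener) is not accepting connections at this point in the run. As a minimal standalone sketch (plain C, not SPDK's posix_sock_create, which also resolves the address and sets socket options), the same errno can be reproduced by connecting to an address and port with no listener:

/* connect_refused.c - minimal illustration of errno 111 (ECONNREFUSED).
 * Not SPDK code; the address and port are taken from the log above,
 * but the failing syscall is the same connect(2). */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);               /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on 10.0.0.2:4420 this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

Built with `cc connect_refused.c` and run while the listener is down, this prints the same connect() failure line that posix_sock_create logs above.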
00:24:21.612-00:24:21.613 [19:06:59.121189 .. 19:06:59.122161] the connect()/qpair error group continues for tqpair=0x7f0718000b90 (addr=10.0.0.2, port=4420); duplicate entries omitted.
00:24:21.613 [2024-07-24 19:06:59.122206] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization...
00:24:21.613 [2024-07-24 19:06:59.122272] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:21.613 [19:06:59.122313 .. 19:06:59.122683] the error group resumes for tqpair=0x17c6250; duplicate entries omitted.
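The EAL line above shows the nvmf target starting with core mask -c 0xF0. Under standard DPDK semantics -c is a hexadecimal bitmask of logical cores, so 0xF0 (binary 11110000) pins the target to lcores 4-7. A small sketch decoding such a mask (illustrative only, not a DPDK or SPDK API):

/* coremask.c - decode a DPDK-style hex coremask such as the "-c 0xF0"
 * in the EAL line above; bit i set means lcore i is enabled. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    unsigned long mask = strtoul("0xF0", NULL, 16);  /* value from the log */
    printf("coremask 0x%lx enables lcores:", mask);
    for (int i = 0; i < 64; i++)
        if (mask & (1UL << i))
            printf(" %d", i);
    printf("\n");   /* prints: coremask 0xf0 enables lcores: 4 5 6 7 */
    return 0;
}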
00:24:21.613 [2024-07-24 19:06:59.122843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.613 [2024-07-24 19:06:59.122867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.613 qpair failed and we were unable to recover it. 00:24:21.613 [2024-07-24 19:06:59.123016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.613 [2024-07-24 19:06:59.123042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.613 qpair failed and we were unable to recover it. 00:24:21.613 [2024-07-24 19:06:59.123196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.613 [2024-07-24 19:06:59.123222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.613 qpair failed and we were unable to recover it. 00:24:21.613 [2024-07-24 19:06:59.123354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.613 [2024-07-24 19:06:59.123379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.613 qpair failed and we were unable to recover it. 00:24:21.613 [2024-07-24 19:06:59.123502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.613 [2024-07-24 19:06:59.123528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.613 qpair failed and we were unable to recover it. 00:24:21.613 [2024-07-24 19:06:59.123652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.613 [2024-07-24 19:06:59.123678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.613 qpair failed and we were unable to recover it. 00:24:21.613 [2024-07-24 19:06:59.123831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.613 [2024-07-24 19:06:59.123857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.613 qpair failed and we were unable to recover it. 00:24:21.613 [2024-07-24 19:06:59.124010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.613 [2024-07-24 19:06:59.124036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.613 qpair failed and we were unable to recover it. 00:24:21.613 [2024-07-24 19:06:59.124214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.613 [2024-07-24 19:06:59.124240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.613 qpair failed and we were unable to recover it. 00:24:21.613 [2024-07-24 19:06:59.124364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.613 [2024-07-24 19:06:59.124390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.613 qpair failed and we were unable to recover it. 
00:24:21.613 [2024-07-24 19:06:59.124543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.613 [2024-07-24 19:06:59.124569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.613 qpair failed and we were unable to recover it. 00:24:21.613 [2024-07-24 19:06:59.124724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.613 [2024-07-24 19:06:59.124750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.613 qpair failed and we were unable to recover it. 00:24:21.613 [2024-07-24 19:06:59.124901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.613 [2024-07-24 19:06:59.124927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.613 qpair failed and we were unable to recover it. 00:24:21.613 [2024-07-24 19:06:59.125077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.613 [2024-07-24 19:06:59.125112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.613 qpair failed and we were unable to recover it. 00:24:21.613 [2024-07-24 19:06:59.125235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.613 [2024-07-24 19:06:59.125261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.613 qpair failed and we were unable to recover it. 00:24:21.613 [2024-07-24 19:06:59.125411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.613 [2024-07-24 19:06:59.125438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.613 qpair failed and we were unable to recover it. 00:24:21.613 [2024-07-24 19:06:59.125622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.613 [2024-07-24 19:06:59.125647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.613 qpair failed and we were unable to recover it. 00:24:21.613 [2024-07-24 19:06:59.125803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.613 [2024-07-24 19:06:59.125829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.613 qpair failed and we were unable to recover it. 00:24:21.614 [2024-07-24 19:06:59.125953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.614 [2024-07-24 19:06:59.125979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.614 qpair failed and we were unable to recover it. 00:24:21.614 [2024-07-24 19:06:59.126148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.614 [2024-07-24 19:06:59.126175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.614 qpair failed and we were unable to recover it. 
00:24:21.614 [2024-07-24 19:06:59.126337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.614 [2024-07-24 19:06:59.126363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.614 qpair failed and we were unable to recover it. 00:24:21.614 [2024-07-24 19:06:59.126523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.614 [2024-07-24 19:06:59.126549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.614 qpair failed and we were unable to recover it. 00:24:21.614 [2024-07-24 19:06:59.126678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.614 [2024-07-24 19:06:59.126703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.614 qpair failed and we were unable to recover it. 00:24:21.614 [2024-07-24 19:06:59.126848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.614 [2024-07-24 19:06:59.126873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.614 qpair failed and we were unable to recover it. 00:24:21.614 [2024-07-24 19:06:59.127027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.614 [2024-07-24 19:06:59.127054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.614 qpair failed and we were unable to recover it. 00:24:21.614 [2024-07-24 19:06:59.127211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.614 [2024-07-24 19:06:59.127237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.614 qpair failed and we were unable to recover it. 00:24:21.614 [2024-07-24 19:06:59.127404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.614 [2024-07-24 19:06:59.127430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.614 qpair failed and we were unable to recover it. 00:24:21.614 [2024-07-24 19:06:59.127601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.614 [2024-07-24 19:06:59.127627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.614 qpair failed and we were unable to recover it. 00:24:21.614 [2024-07-24 19:06:59.127761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.614 [2024-07-24 19:06:59.127786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.614 qpair failed and we were unable to recover it. 00:24:21.614 [2024-07-24 19:06:59.127959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.614 [2024-07-24 19:06:59.127985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.614 qpair failed and we were unable to recover it. 
00:24:21.614 [2024-07-24 19:06:59.128157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.614 [2024-07-24 19:06:59.128183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.614 qpair failed and we were unable to recover it. 00:24:21.614 [2024-07-24 19:06:59.128364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.614 [2024-07-24 19:06:59.128390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.614 qpair failed and we were unable to recover it. 00:24:21.614 [2024-07-24 19:06:59.128545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.614 [2024-07-24 19:06:59.128571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.614 qpair failed and we were unable to recover it. 00:24:21.614 [2024-07-24 19:06:59.128730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.614 [2024-07-24 19:06:59.128756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.614 qpair failed and we were unable to recover it. 00:24:21.614 [2024-07-24 19:06:59.128911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.614 [2024-07-24 19:06:59.128937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.614 qpair failed and we were unable to recover it. 00:24:21.614 [2024-07-24 19:06:59.129096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.614 [2024-07-24 19:06:59.129129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.614 qpair failed and we were unable to recover it. 00:24:21.614 [2024-07-24 19:06:59.129285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.614 [2024-07-24 19:06:59.129311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.614 qpair failed and we were unable to recover it. 00:24:21.614 [2024-07-24 19:06:59.129445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.614 [2024-07-24 19:06:59.129471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.614 qpair failed and we were unable to recover it. 00:24:21.615 [2024-07-24 19:06:59.129621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.615 [2024-07-24 19:06:59.129647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.615 qpair failed and we were unable to recover it. 00:24:21.615 [2024-07-24 19:06:59.129800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.615 [2024-07-24 19:06:59.129826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.615 qpair failed and we were unable to recover it. 
00:24:21.615 [2024-07-24 19:06:59.129965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.615 [2024-07-24 19:06:59.129991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.615 qpair failed and we were unable to recover it. 00:24:21.615 [2024-07-24 19:06:59.130149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.615 [2024-07-24 19:06:59.130176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.615 qpair failed and we were unable to recover it. 00:24:21.615 [2024-07-24 19:06:59.130342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.615 [2024-07-24 19:06:59.130368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.615 qpair failed and we were unable to recover it. 00:24:21.615 [2024-07-24 19:06:59.130522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.615 [2024-07-24 19:06:59.130548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.615 qpair failed and we were unable to recover it. 00:24:21.615 [2024-07-24 19:06:59.130699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.615 [2024-07-24 19:06:59.130725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.615 qpair failed and we were unable to recover it. 00:24:21.615 [2024-07-24 19:06:59.130877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.615 [2024-07-24 19:06:59.130904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.615 qpair failed and we were unable to recover it. 00:24:21.615 [2024-07-24 19:06:59.131058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.615 [2024-07-24 19:06:59.131084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.615 qpair failed and we were unable to recover it. 00:24:21.615 [2024-07-24 19:06:59.131223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.615 [2024-07-24 19:06:59.131249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.615 qpair failed and we were unable to recover it. 00:24:21.615 [2024-07-24 19:06:59.131399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.615 [2024-07-24 19:06:59.131425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.615 qpair failed and we were unable to recover it. 00:24:21.615 [2024-07-24 19:06:59.131574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.615 [2024-07-24 19:06:59.131600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.615 qpair failed and we were unable to recover it. 
00:24:21.615 [2024-07-24 19:06:59.131736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.615 [2024-07-24 19:06:59.131762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.615 qpair failed and we were unable to recover it. 00:24:21.615 [2024-07-24 19:06:59.131913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.615 [2024-07-24 19:06:59.131938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.615 qpair failed and we were unable to recover it. 00:24:21.615 [2024-07-24 19:06:59.132114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.615 [2024-07-24 19:06:59.132140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.615 qpair failed and we were unable to recover it. 00:24:21.615 [2024-07-24 19:06:59.132265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.615 [2024-07-24 19:06:59.132290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.615 qpair failed and we were unable to recover it. 00:24:21.615 [2024-07-24 19:06:59.132408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.615 [2024-07-24 19:06:59.132434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.615 qpair failed and we were unable to recover it. 00:24:21.615 [2024-07-24 19:06:59.132583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.615 [2024-07-24 19:06:59.132624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.615 qpair failed and we were unable to recover it. 00:24:21.615 [2024-07-24 19:06:59.132803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.615 [2024-07-24 19:06:59.132833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.615 qpair failed and we were unable to recover it. 00:24:21.615 [2024-07-24 19:06:59.133023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.615 [2024-07-24 19:06:59.133051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.615 qpair failed and we were unable to recover it. 00:24:21.615 [2024-07-24 19:06:59.133191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.615 [2024-07-24 19:06:59.133220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.615 qpair failed and we were unable to recover it. 00:24:21.615 [2024-07-24 19:06:59.133398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.615 [2024-07-24 19:06:59.133425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.615 qpair failed and we were unable to recover it. 
00:24:21.899 [2024-07-24 19:06:59.133583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.899 [2024-07-24 19:06:59.133614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.899 qpair failed and we were unable to recover it.
00:24:21.899 [2024-07-24 19:06:59.133773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.899 [2024-07-24 19:06:59.133798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.899 qpair failed and we were unable to recover it.
00:24:21.899 [2024-07-24 19:06:59.133931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.899 [2024-07-24 19:06:59.133956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.899 qpair failed and we were unable to recover it.
00:24:21.899 [2024-07-24 19:06:59.134108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.899 [2024-07-24 19:06:59.134147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.899 qpair failed and we were unable to recover it.
00:24:21.899 [2024-07-24 19:06:59.134277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.899 [2024-07-24 19:06:59.134304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.899 qpair failed and we were unable to recover it.
00:24:21.899 [2024-07-24 19:06:59.134432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.899 [2024-07-24 19:06:59.134458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.899 qpair failed and we were unable to recover it.
00:24:21.899 [2024-07-24 19:06:59.134585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.899 [2024-07-24 19:06:59.134611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.899 qpair failed and we were unable to recover it.
00:24:21.899 [2024-07-24 19:06:59.134762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.899 [2024-07-24 19:06:59.134787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.899 qpair failed and we were unable to recover it.
00:24:21.899 [2024-07-24 19:06:59.134926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.899 [2024-07-24 19:06:59.134951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.899 qpair failed and we were unable to recover it.
00:24:21.899 [2024-07-24 19:06:59.135130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.899 [2024-07-24 19:06:59.135159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.899 qpair failed and we were unable to recover it.
00:24:21.899 [2024-07-24 19:06:59.135329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.899 [2024-07-24 19:06:59.135368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.899 qpair failed and we were unable to recover it.
00:24:21.899 [2024-07-24 19:06:59.135530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.899 [2024-07-24 19:06:59.135556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.899 qpair failed and we were unable to recover it.
00:24:21.899 [2024-07-24 19:06:59.135693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.899 [2024-07-24 19:06:59.135722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.899 qpair failed and we were unable to recover it.
00:24:21.899 [2024-07-24 19:06:59.135863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.899 [2024-07-24 19:06:59.135889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.899 qpair failed and we were unable to recover it.
00:24:21.899 [2024-07-24 19:06:59.136030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.899 [2024-07-24 19:06:59.136056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.899 qpair failed and we were unable to recover it.
00:24:21.899 [2024-07-24 19:06:59.136207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.899 [2024-07-24 19:06:59.136235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.899 qpair failed and we were unable to recover it.
00:24:21.899 [2024-07-24 19:06:59.136391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.899 [2024-07-24 19:06:59.136418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.899 qpair failed and we were unable to recover it.
00:24:21.899 [2024-07-24 19:06:59.136562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.899 [2024-07-24 19:06:59.136588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.899 qpair failed and we were unable to recover it.
00:24:21.899 [2024-07-24 19:06:59.136715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.899 [2024-07-24 19:06:59.136740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.899 qpair failed and we were unable to recover it.
00:24:21.899 [2024-07-24 19:06:59.136896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.899 [2024-07-24 19:06:59.136920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.899 qpair failed and we were unable to recover it.
00:24:21.899 [2024-07-24 19:06:59.137068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.899 [2024-07-24 19:06:59.137093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.899 qpair failed and we were unable to recover it.
00:24:21.899 [2024-07-24 19:06:59.137284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.899 [2024-07-24 19:06:59.137313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.899 qpair failed and we were unable to recover it.
00:24:21.899 [2024-07-24 19:06:59.137447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.899 [2024-07-24 19:06:59.137474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.899 qpair failed and we were unable to recover it.
00:24:21.899 [2024-07-24 19:06:59.137631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.899 [2024-07-24 19:06:59.137658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.899 qpair failed and we were unable to recover it.
00:24:21.899 [2024-07-24 19:06:59.137781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.899 [2024-07-24 19:06:59.137807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.899 qpair failed and we were unable to recover it.
00:24:21.899 [2024-07-24 19:06:59.137936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.899 [2024-07-24 19:06:59.137964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.899 qpair failed and we were unable to recover it.
00:24:21.899 [2024-07-24 19:06:59.138122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.899 [2024-07-24 19:06:59.138149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.899 qpair failed and we were unable to recover it.
00:24:21.899 [2024-07-24 19:06:59.138298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.899 [2024-07-24 19:06:59.138325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.899 qpair failed and we were unable to recover it.
00:24:21.899 [2024-07-24 19:06:59.138453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.900 [2024-07-24 19:06:59.138478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.900 qpair failed and we were unable to recover it.
00:24:21.900 [2024-07-24 19:06:59.138629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.900 [2024-07-24 19:06:59.138654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.900 qpair failed and we were unable to recover it.
00:24:21.900 [2024-07-24 19:06:59.138774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.900 [2024-07-24 19:06:59.138799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.900 qpair failed and we were unable to recover it.
00:24:21.900 [2024-07-24 19:06:59.138944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.900 [2024-07-24 19:06:59.138969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.900 qpair failed and we were unable to recover it.
00:24:21.900 [2024-07-24 19:06:59.139120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.900 [2024-07-24 19:06:59.139145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.900 qpair failed and we were unable to recover it.
00:24:21.900 [2024-07-24 19:06:59.139293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.900 [2024-07-24 19:06:59.139318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.900 qpair failed and we were unable to recover it.
00:24:21.900 [2024-07-24 19:06:59.139484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.900 [2024-07-24 19:06:59.139509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.900 qpair failed and we were unable to recover it.
00:24:21.900 [2024-07-24 19:06:59.139664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.900 [2024-07-24 19:06:59.139689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.900 qpair failed and we were unable to recover it.
00:24:21.900 [2024-07-24 19:06:59.139815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.900 [2024-07-24 19:06:59.139843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.900 qpair failed and we were unable to recover it.
00:24:21.900 [2024-07-24 19:06:59.139995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.900 [2024-07-24 19:06:59.140021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.900 qpair failed and we were unable to recover it.
00:24:21.900 [2024-07-24 19:06:59.140172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.900 [2024-07-24 19:06:59.140199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.900 qpair failed and we were unable to recover it.
00:24:21.900 [2024-07-24 19:06:59.140381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.900 [2024-07-24 19:06:59.140407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.900 qpair failed and we were unable to recover it.
00:24:21.900 [2024-07-24 19:06:59.140576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.900 [2024-07-24 19:06:59.140601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.900 qpair failed and we were unable to recover it.
00:24:21.900 [2024-07-24 19:06:59.140761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.900 [2024-07-24 19:06:59.140788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.900 qpair failed and we were unable to recover it.
00:24:21.900 [2024-07-24 19:06:59.140937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.900 [2024-07-24 19:06:59.140964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.900 qpair failed and we were unable to recover it.
00:24:21.900 [2024-07-24 19:06:59.141086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.900 [2024-07-24 19:06:59.141118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.900 qpair failed and we were unable to recover it.
00:24:21.900 [2024-07-24 19:06:59.141275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.900 [2024-07-24 19:06:59.141301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.900 qpair failed and we were unable to recover it.
00:24:21.900 [2024-07-24 19:06:59.141437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.900 [2024-07-24 19:06:59.141463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.900 qpair failed and we were unable to recover it.
00:24:21.900 [2024-07-24 19:06:59.141595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.900 [2024-07-24 19:06:59.141621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.900 qpair failed and we were unable to recover it.
00:24:21.900 [2024-07-24 19:06:59.141774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.900 [2024-07-24 19:06:59.141800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.900 qpair failed and we were unable to recover it.
00:24:21.900 [2024-07-24 19:06:59.141932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.900 [2024-07-24 19:06:59.141959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.900 qpair failed and we were unable to recover it.
00:24:21.900 [2024-07-24 19:06:59.142111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.900 [2024-07-24 19:06:59.142137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.900 qpair failed and we were unable to recover it.
00:24:21.900 [2024-07-24 19:06:59.142257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.900 [2024-07-24 19:06:59.142282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.900 qpair failed and we were unable to recover it.
00:24:21.900 [2024-07-24 19:06:59.142433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.900 [2024-07-24 19:06:59.142457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.900 qpair failed and we were unable to recover it.
00:24:21.900 [2024-07-24 19:06:59.142607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.900 [2024-07-24 19:06:59.142634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.900 qpair failed and we were unable to recover it.
00:24:21.900 [2024-07-24 19:06:59.142801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.900 [2024-07-24 19:06:59.142827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.900 qpair failed and we were unable to recover it.
00:24:21.900 [2024-07-24 19:06:59.142982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.900 [2024-07-24 19:06:59.143007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.900 qpair failed and we were unable to recover it.
00:24:21.900 [2024-07-24 19:06:59.143162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.900 [2024-07-24 19:06:59.143188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.900 qpair failed and we were unable to recover it.
00:24:21.900 [2024-07-24 19:06:59.143308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.900 [2024-07-24 19:06:59.143334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.900 qpair failed and we were unable to recover it.
00:24:21.900 [2024-07-24 19:06:59.143520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.900 [2024-07-24 19:06:59.143545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.900 qpair failed and we were unable to recover it.
00:24:21.900 [2024-07-24 19:06:59.143668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.900 [2024-07-24 19:06:59.143693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.900 qpair failed and we were unable to recover it.
00:24:21.900 [2024-07-24 19:06:59.143858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.900 [2024-07-24 19:06:59.143883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.900 qpair failed and we were unable to recover it.
00:24:21.900 [2024-07-24 19:06:59.144031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.900 [2024-07-24 19:06:59.144056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.900 qpair failed and we were unable to recover it.
00:24:21.900 [2024-07-24 19:06:59.144227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.900 [2024-07-24 19:06:59.144266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.144427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.144454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.144579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.144605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.144752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.144778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.144935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.144960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.145111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.145137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.145295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.145325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.145488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.145514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.145667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.145693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.145827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.145852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.146000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.146025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.146150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.146176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.146328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.146353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.146505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.146531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.146686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.146711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.146865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.146889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.147043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.147067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.147253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.147279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.147433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.147459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.147589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.147613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.147769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.147794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.147949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.147974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.148099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.148128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.148254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.148279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.148416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.148442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.148592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.148617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.148744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.148770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.148922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.148947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.149126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.149151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.149279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.149304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.149423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.149447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.149576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.149618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.149805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.149830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.149987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.150012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.150191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.150217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.150368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.150392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.150544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.150581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.901 [2024-07-24 19:06:59.150715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.901 [2024-07-24 19:06:59.150740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.901 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.150916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.150941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.151062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.151087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.151217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.151241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.151393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.151417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.151594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.151620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.151749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.151775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.151919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.151943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.152095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.152126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.152251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.152280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.152433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.152458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.152609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.152634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.152755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.152779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.152908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.152932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.153111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.153137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.153291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.153316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.153454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.153493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.153660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.153687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.153846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.153872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.154000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.154026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.154150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.154176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.154351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.154377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.154534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.154562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.154724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.154751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.154901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.154927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.155073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.155119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.155254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.155281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.155401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.155426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.155558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.155585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.155738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.155763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.155911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.155938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.156071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.156097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.156281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.156307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.156432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.156457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.156636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.156661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.156791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.902 [2024-07-24 19:06:59.156816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.902 qpair failed and we were unable to recover it.
00:24:21.902 [2024-07-24 19:06:59.156943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.156968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 [2024-07-24 19:06:59.157115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.157142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 [2024-07-24 19:06:59.157295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.157322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 [2024-07-24 19:06:59.157453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.157479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 [2024-07-24 19:06:59.157666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.157692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 [2024-07-24 19:06:59.157846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.157872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 [2024-07-24 19:06:59.157997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.158022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 [2024-07-24 19:06:59.158180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.158208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 [2024-07-24 19:06:59.158361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.158387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 [2024-07-24 19:06:59.158537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.158563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 [2024-07-24 19:06:59.158712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.158737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 [2024-07-24 19:06:59.158884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.158910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 [2024-07-24 19:06:59.159047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.159072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 [2024-07-24 19:06:59.159235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.159266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 [2024-07-24 19:06:59.159420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.159446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 [2024-07-24 19:06:59.159626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.159652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 [2024-07-24 19:06:59.159787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.159812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 [2024-07-24 19:06:59.159967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.159994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 [2024-07-24 19:06:59.160129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.160155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 [2024-07-24 19:06:59.160312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.160337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 EAL: No free 2048 kB hugepages reported on node 1
00:24:21.903 [2024-07-24 19:06:59.160489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.160516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 [2024-07-24 19:06:59.160671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.160698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 [2024-07-24 19:06:59.160876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.160901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 [2024-07-24 19:06:59.161025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.161051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 [2024-07-24 19:06:59.161205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.161231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 [2024-07-24 19:06:59.161379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.161404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 [2024-07-24 19:06:59.161556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.161582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 [2024-07-24 19:06:59.161750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.161775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 [2024-07-24 19:06:59.161955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.161981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 [2024-07-24 19:06:59.162116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.162142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 [2024-07-24 19:06:59.162270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.162295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 [2024-07-24 19:06:59.162411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.162436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 [2024-07-24 19:06:59.162560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.162585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 [2024-07-24 19:06:59.162718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.162742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 [2024-07-24 19:06:59.162902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.903 [2024-07-24 19:06:59.162928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.903 qpair failed and we were unable to recover it.
00:24:21.903 [2024-07-24 19:06:59.163076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.904 [2024-07-24 19:06:59.163113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.904 qpair failed and we were unable to recover it.
00:24:21.904 [2024-07-24 19:06:59.163267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.904 [2024-07-24 19:06:59.163293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.904 qpair failed and we were unable to recover it. 00:24:21.904 [2024-07-24 19:06:59.163428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.904 [2024-07-24 19:06:59.163453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.904 qpair failed and we were unable to recover it. 00:24:21.904 [2024-07-24 19:06:59.163585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.904 [2024-07-24 19:06:59.163610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.904 qpair failed and we were unable to recover it. 00:24:21.904 [2024-07-24 19:06:59.163789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.904 [2024-07-24 19:06:59.163814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.904 qpair failed and we were unable to recover it. 00:24:21.904 [2024-07-24 19:06:59.163946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.904 [2024-07-24 19:06:59.163972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.904 qpair failed and we were unable to recover it. 00:24:21.904 [2024-07-24 19:06:59.164107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.904 [2024-07-24 19:06:59.164134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.904 qpair failed and we were unable to recover it. 00:24:21.904 [2024-07-24 19:06:59.164267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.904 [2024-07-24 19:06:59.164292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.904 qpair failed and we were unable to recover it. 00:24:21.904 [2024-07-24 19:06:59.164424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.904 [2024-07-24 19:06:59.164450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.904 qpair failed and we were unable to recover it. 00:24:21.904 [2024-07-24 19:06:59.164568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.904 [2024-07-24 19:06:59.164593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.904 qpair failed and we were unable to recover it. 00:24:21.904 [2024-07-24 19:06:59.164722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.904 [2024-07-24 19:06:59.164747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.904 qpair failed and we were unable to recover it. 
00:24:21.904 [2024-07-24 19:06:59.164904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.904 [2024-07-24 19:06:59.164930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.904 qpair failed and we were unable to recover it. 00:24:21.904 [2024-07-24 19:06:59.165052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.904 [2024-07-24 19:06:59.165078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.904 qpair failed and we were unable to recover it. 00:24:21.904 [2024-07-24 19:06:59.165261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.904 [2024-07-24 19:06:59.165286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.904 qpair failed and we were unable to recover it. 00:24:21.904 [2024-07-24 19:06:59.165413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.904 [2024-07-24 19:06:59.165438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.904 qpair failed and we were unable to recover it. 00:24:21.904 [2024-07-24 19:06:59.165566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.904 [2024-07-24 19:06:59.165590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.904 qpair failed and we were unable to recover it. 00:24:21.904 [2024-07-24 19:06:59.165741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.904 [2024-07-24 19:06:59.165766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.904 qpair failed and we were unable to recover it. 00:24:21.904 [2024-07-24 19:06:59.165890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.904 [2024-07-24 19:06:59.165916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.904 qpair failed and we were unable to recover it. 00:24:21.904 [2024-07-24 19:06:59.166038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.904 [2024-07-24 19:06:59.166068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.904 qpair failed and we were unable to recover it. 00:24:21.904 [2024-07-24 19:06:59.166226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.904 [2024-07-24 19:06:59.166253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.904 qpair failed and we were unable to recover it. 00:24:21.904 [2024-07-24 19:06:59.166405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.904 [2024-07-24 19:06:59.166430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.904 qpair failed and we were unable to recover it. 
00:24:21.904 [2024-07-24 19:06:59.166557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.904 [2024-07-24 19:06:59.166582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.904 qpair failed and we were unable to recover it. 00:24:21.904 [2024-07-24 19:06:59.166738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.904 [2024-07-24 19:06:59.166763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.904 qpair failed and we were unable to recover it. 00:24:21.904 [2024-07-24 19:06:59.166896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.904 [2024-07-24 19:06:59.166920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.904 qpair failed and we were unable to recover it. 00:24:21.904 [2024-07-24 19:06:59.167069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.904 [2024-07-24 19:06:59.167094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.904 qpair failed and we were unable to recover it. 00:24:21.904 [2024-07-24 19:06:59.167228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.904 [2024-07-24 19:06:59.167253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.904 qpair failed and we were unable to recover it. 00:24:21.904 [2024-07-24 19:06:59.167430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.904 [2024-07-24 19:06:59.167454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.904 qpair failed and we were unable to recover it. 00:24:21.904 [2024-07-24 19:06:59.167596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.904 [2024-07-24 19:06:59.167621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.904 qpair failed and we were unable to recover it. 00:24:21.904 [2024-07-24 19:06:59.167772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.904 [2024-07-24 19:06:59.167797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.904 qpair failed and we were unable to recover it. 00:24:21.904 [2024-07-24 19:06:59.167932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.904 [2024-07-24 19:06:59.167957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.904 qpair failed and we were unable to recover it. 00:24:21.904 [2024-07-24 19:06:59.168114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.904 [2024-07-24 19:06:59.168139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.904 qpair failed and we were unable to recover it. 
00:24:21.904 [2024-07-24 19:06:59.168263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.168289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.168423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.168447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.168575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.168602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.168785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.168811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.168935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.168959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.169115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.169141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.169288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.169314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.169438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.169463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.169595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.169620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.169769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.169794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 
00:24:21.905 [2024-07-24 19:06:59.169928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.169954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.170077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.170114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.170267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.170292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.170428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.170452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.170605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.170631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.170761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.170786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.170904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.170928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.171096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.171127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.171305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.171330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.171487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.171513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 
00:24:21.905 [2024-07-24 19:06:59.171649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.171674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.171824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.171849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.171995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.172020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.172169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.172195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.172346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.172371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.172509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.172533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.172662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.172688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.172839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.172868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.173001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.173026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.173181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.173206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 
00:24:21.905 [2024-07-24 19:06:59.173360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.173385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.173536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.173561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.173716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.173741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.173862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.173888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.174037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.174061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.174194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.174219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.174367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.174392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.174550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.905 [2024-07-24 19:06:59.174576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.905 qpair failed and we were unable to recover it. 00:24:21.905 [2024-07-24 19:06:59.174730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.174755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 00:24:21.906 [2024-07-24 19:06:59.174890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.174916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 
00:24:21.906 [2024-07-24 19:06:59.175076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.175106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 00:24:21.906 [2024-07-24 19:06:59.175261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.175286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 00:24:21.906 [2024-07-24 19:06:59.175401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.175426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 00:24:21.906 [2024-07-24 19:06:59.175558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.175582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 00:24:21.906 [2024-07-24 19:06:59.175732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.175758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 00:24:21.906 [2024-07-24 19:06:59.175891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.175916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 00:24:21.906 [2024-07-24 19:06:59.176047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.176072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 00:24:21.906 [2024-07-24 19:06:59.176206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.176232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 00:24:21.906 [2024-07-24 19:06:59.176361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.176386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 00:24:21.906 [2024-07-24 19:06:59.176537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.176561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 
00:24:21.906 [2024-07-24 19:06:59.176711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.176737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 00:24:21.906 [2024-07-24 19:06:59.176872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.176897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 00:24:21.906 [2024-07-24 19:06:59.177047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.177072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 00:24:21.906 [2024-07-24 19:06:59.177210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.177235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 00:24:21.906 [2024-07-24 19:06:59.177426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.177451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 00:24:21.906 [2024-07-24 19:06:59.177582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.177607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 00:24:21.906 [2024-07-24 19:06:59.177763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.177788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 00:24:21.906 [2024-07-24 19:06:59.177938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.177963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 00:24:21.906 [2024-07-24 19:06:59.178123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.178149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 00:24:21.906 [2024-07-24 19:06:59.178296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.178321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 
00:24:21.906 [2024-07-24 19:06:59.178475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.178500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 00:24:21.906 [2024-07-24 19:06:59.178676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.178701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 00:24:21.906 [2024-07-24 19:06:59.178859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.178884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 00:24:21.906 [2024-07-24 19:06:59.179040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.179066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 00:24:21.906 [2024-07-24 19:06:59.179230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.179256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 00:24:21.906 [2024-07-24 19:06:59.179389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.179414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 00:24:21.906 [2024-07-24 19:06:59.179554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.179579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 00:24:21.906 [2024-07-24 19:06:59.179725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.179754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 00:24:21.906 [2024-07-24 19:06:59.179881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.179906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 00:24:21.906 [2024-07-24 19:06:59.180057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.180083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 
00:24:21.906 [2024-07-24 19:06:59.180241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.180266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 00:24:21.906 [2024-07-24 19:06:59.180388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.180414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 00:24:21.906 [2024-07-24 19:06:59.180592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.180618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 00:24:21.906 [2024-07-24 19:06:59.180769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.180794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 00:24:21.906 [2024-07-24 19:06:59.180945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.180969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 00:24:21.906 [2024-07-24 19:06:59.181097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.906 [2024-07-24 19:06:59.181127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.906 qpair failed and we were unable to recover it. 00:24:21.907 [2024-07-24 19:06:59.181277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.181302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 00:24:21.907 [2024-07-24 19:06:59.181458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.181482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 00:24:21.907 [2024-07-24 19:06:59.181637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.181662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 00:24:21.907 [2024-07-24 19:06:59.181797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.181823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 
00:24:21.907 [2024-07-24 19:06:59.181970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.181995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 00:24:21.907 [2024-07-24 19:06:59.182154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.182179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 00:24:21.907 [2024-07-24 19:06:59.182322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.182348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 00:24:21.907 [2024-07-24 19:06:59.182468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.182494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 00:24:21.907 [2024-07-24 19:06:59.182639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.182664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 00:24:21.907 [2024-07-24 19:06:59.182820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.182846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 00:24:21.907 [2024-07-24 19:06:59.182976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.183000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 00:24:21.907 [2024-07-24 19:06:59.183156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.183182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 00:24:21.907 [2024-07-24 19:06:59.183308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.183334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 00:24:21.907 [2024-07-24 19:06:59.183466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.183491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 
00:24:21.907 [2024-07-24 19:06:59.183642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.183668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 00:24:21.907 [2024-07-24 19:06:59.183813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.183837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 00:24:21.907 [2024-07-24 19:06:59.183988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.184013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 00:24:21.907 [2024-07-24 19:06:59.184148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.184187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 00:24:21.907 [2024-07-24 19:06:59.184318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.184344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 00:24:21.907 [2024-07-24 19:06:59.184519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.184545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 00:24:21.907 [2024-07-24 19:06:59.184666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.184690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 00:24:21.907 [2024-07-24 19:06:59.184836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.184862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 00:24:21.907 [2024-07-24 19:06:59.185004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.185028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 00:24:21.907 [2024-07-24 19:06:59.185180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.185205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 
00:24:21.907 [2024-07-24 19:06:59.185351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.185377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 00:24:21.907 [2024-07-24 19:06:59.185498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.185523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 00:24:21.907 [2024-07-24 19:06:59.185644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.185670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 00:24:21.907 [2024-07-24 19:06:59.185857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.185882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 00:24:21.907 [2024-07-24 19:06:59.186007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.186032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 00:24:21.907 [2024-07-24 19:06:59.186145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.186171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 00:24:21.907 [2024-07-24 19:06:59.186326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.186351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 00:24:21.907 [2024-07-24 19:06:59.186504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.186533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 00:24:21.907 [2024-07-24 19:06:59.186709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.186735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 00:24:21.907 [2024-07-24 19:06:59.186857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.186883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 
00:24:21.907 [2024-07-24 19:06:59.187015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.187040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 00:24:21.907 [2024-07-24 19:06:59.187184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.187210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 00:24:21.907 [2024-07-24 19:06:59.187365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.907 [2024-07-24 19:06:59.187391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.907 qpair failed and we were unable to recover it. 00:24:21.907 [2024-07-24 19:06:59.187519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.908 [2024-07-24 19:06:59.187543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.908 qpair failed and we were unable to recover it. 00:24:21.908 [2024-07-24 19:06:59.187692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.908 [2024-07-24 19:06:59.187717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.908 qpair failed and we were unable to recover it. 00:24:21.908 [2024-07-24 19:06:59.187870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.908 [2024-07-24 19:06:59.187895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.908 qpair failed and we were unable to recover it. 00:24:21.908 [2024-07-24 19:06:59.188027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.908 [2024-07-24 19:06:59.188052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.908 qpair failed and we were unable to recover it. 00:24:21.908 [2024-07-24 19:06:59.188227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.908 [2024-07-24 19:06:59.188253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.908 qpair failed and we were unable to recover it. 00:24:21.908 [2024-07-24 19:06:59.188402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.908 [2024-07-24 19:06:59.188427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.908 qpair failed and we were unable to recover it. 00:24:21.908 [2024-07-24 19:06:59.188551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.908 [2024-07-24 19:06:59.188576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.908 qpair failed and we were unable to recover it. 
00:24:21.908 [2024-07-24 19:06:59.188734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.908 [2024-07-24 19:06:59.188759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.908 qpair failed and we were unable to recover it. 00:24:21.908 [2024-07-24 19:06:59.188915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.908 [2024-07-24 19:06:59.188941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.908 qpair failed and we were unable to recover it. 00:24:21.908 [2024-07-24 19:06:59.189073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.908 [2024-07-24 19:06:59.189098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.908 qpair failed and we were unable to recover it. 00:24:21.908 [2024-07-24 19:06:59.189254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.908 [2024-07-24 19:06:59.189280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.908 qpair failed and we were unable to recover it. 00:24:21.908 [2024-07-24 19:06:59.189429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.908 [2024-07-24 19:06:59.189454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.908 qpair failed and we were unable to recover it. 00:24:21.908 [2024-07-24 19:06:59.189571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.908 [2024-07-24 19:06:59.189595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.908 qpair failed and we were unable to recover it. 00:24:21.908 [2024-07-24 19:06:59.189716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.908 [2024-07-24 19:06:59.189742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.908 qpair failed and we were unable to recover it. 00:24:21.908 [2024-07-24 19:06:59.189894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.908 [2024-07-24 19:06:59.189920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.908 qpair failed and we were unable to recover it. 00:24:21.908 [2024-07-24 19:06:59.190075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.908 [2024-07-24 19:06:59.190114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.908 qpair failed and we were unable to recover it. 00:24:21.908 [2024-07-24 19:06:59.190294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.908 [2024-07-24 19:06:59.190320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.908 qpair failed and we were unable to recover it. 
00:24:21.908 [2024-07-24 19:06:59.190479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.908 [2024-07-24 19:06:59.190505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.908 qpair failed and we were unable to recover it. 00:24:21.908 [2024-07-24 19:06:59.190639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.908 [2024-07-24 19:06:59.190663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.908 qpair failed and we were unable to recover it. 00:24:21.908 [2024-07-24 19:06:59.190825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.908 [2024-07-24 19:06:59.190851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.908 qpair failed and we were unable to recover it. 00:24:21.908 [2024-07-24 19:06:59.191006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.908 [2024-07-24 19:06:59.191031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.908 qpair failed and we were unable to recover it. 00:24:21.908 [2024-07-24 19:06:59.191206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.908 [2024-07-24 19:06:59.191232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.908 qpair failed and we were unable to recover it. 00:24:21.908 [2024-07-24 19:06:59.191410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.908 [2024-07-24 19:06:59.191435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.908 qpair failed and we were unable to recover it. 00:24:21.908 [2024-07-24 19:06:59.191607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.908 [2024-07-24 19:06:59.191633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.908 qpair failed and we were unable to recover it. 00:24:21.908 [2024-07-24 19:06:59.191759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.908 [2024-07-24 19:06:59.191785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.908 qpair failed and we were unable to recover it. 00:24:21.908 [2024-07-24 19:06:59.191937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.908 [2024-07-24 19:06:59.191962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.908 qpair failed and we were unable to recover it. 00:24:21.908 [2024-07-24 19:06:59.192094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.908 [2024-07-24 19:06:59.192123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.908 qpair failed and we were unable to recover it. 
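For context on the repeated failure: errno = 111 on Linux is ECONNREFUSED, meaning the target host answered the TCP SYN with a RST because nothing was listening on port 4420 yet, so every qpair connect attempt died the same way. A minimal sketch of the same syscall path (plain POSIX sockets, not SPDK's posix_sock_create; it assumes the peer is reachable but the port is closed, since an unreachable peer would yield ETIMEDOUT or EHOSTUNREACH instead):

#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};

    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        /* Prints "errno = 111" (ECONNREFUSED) when the peer sends a RST. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}

Run against any reachable host with the port closed and it should print the same errno = 111 line as the entries above; the initiator simply keeps retrying this until the target's listener comes up.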
00:24:21.909 [2024-07-24 19:06:59.193348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:24:21.913 [2024-07-24 19:06:59.223573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.913 [2024-07-24 19:06:59.223598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.913 qpair failed and we were unable to recover it. 00:24:21.913 [2024-07-24 19:06:59.223755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.913 [2024-07-24 19:06:59.223780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.913 qpair failed and we were unable to recover it. 00:24:21.913 [2024-07-24 19:06:59.223905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.913 [2024-07-24 19:06:59.223930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.913 qpair failed and we were unable to recover it. 00:24:21.913 [2024-07-24 19:06:59.224086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.913 [2024-07-24 19:06:59.224131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.913 qpair failed and we were unable to recover it. 00:24:21.913 [2024-07-24 19:06:59.224258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.913 [2024-07-24 19:06:59.224282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.913 qpair failed and we were unable to recover it. 00:24:21.913 [2024-07-24 19:06:59.224408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.913 [2024-07-24 19:06:59.224432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.913 qpair failed and we were unable to recover it. 00:24:21.913 [2024-07-24 19:06:59.224607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.913 [2024-07-24 19:06:59.224632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.913 qpair failed and we were unable to recover it. 00:24:21.913 [2024-07-24 19:06:59.224754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.913 [2024-07-24 19:06:59.224779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.913 qpair failed and we were unable to recover it. 00:24:21.913 [2024-07-24 19:06:59.224914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.913 [2024-07-24 19:06:59.224939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.913 qpair failed and we were unable to recover it. 00:24:21.914 [2024-07-24 19:06:59.225091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.225123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 
00:24:21.914 [2024-07-24 19:06:59.225284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.225310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 00:24:21.914 [2024-07-24 19:06:59.225460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.225484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 00:24:21.914 [2024-07-24 19:06:59.225630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.225654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 00:24:21.914 [2024-07-24 19:06:59.225785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.225810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 00:24:21.914 [2024-07-24 19:06:59.225957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.225983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 00:24:21.914 [2024-07-24 19:06:59.226131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.226157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 00:24:21.914 [2024-07-24 19:06:59.226343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.226368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 00:24:21.914 [2024-07-24 19:06:59.226500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.226525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 00:24:21.914 [2024-07-24 19:06:59.226648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.226673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 00:24:21.914 [2024-07-24 19:06:59.226791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.226815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 
00:24:21.914 [2024-07-24 19:06:59.226957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.226982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 00:24:21.914 [2024-07-24 19:06:59.227158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.227184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 00:24:21.914 [2024-07-24 19:06:59.227309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.227334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 00:24:21.914 [2024-07-24 19:06:59.227487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.227511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 00:24:21.914 [2024-07-24 19:06:59.227636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.227661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 00:24:21.914 [2024-07-24 19:06:59.227811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.227835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 00:24:21.914 [2024-07-24 19:06:59.227980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.228005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 00:24:21.914 [2024-07-24 19:06:59.228134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.228160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 00:24:21.914 [2024-07-24 19:06:59.228310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.228339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 00:24:21.914 [2024-07-24 19:06:59.228472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.228497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 
00:24:21.914 [2024-07-24 19:06:59.228656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.228681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 00:24:21.914 [2024-07-24 19:06:59.228849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.228874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 00:24:21.914 [2024-07-24 19:06:59.228995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.229021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 00:24:21.914 [2024-07-24 19:06:59.229186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.229213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 00:24:21.914 [2024-07-24 19:06:59.229356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.229382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 00:24:21.914 [2024-07-24 19:06:59.229534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.229560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 00:24:21.914 [2024-07-24 19:06:59.229687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.229712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 00:24:21.914 [2024-07-24 19:06:59.229862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.229886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 00:24:21.914 [2024-07-24 19:06:59.230010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.230035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 00:24:21.914 [2024-07-24 19:06:59.230184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.230209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 
00:24:21.914 [2024-07-24 19:06:59.230364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.230390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 00:24:21.914 [2024-07-24 19:06:59.230545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.230571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 00:24:21.914 [2024-07-24 19:06:59.230718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.230743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 00:24:21.914 [2024-07-24 19:06:59.230869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.230895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 00:24:21.914 [2024-07-24 19:06:59.231028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.231053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 00:24:21.914 [2024-07-24 19:06:59.231186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.231212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 00:24:21.914 [2024-07-24 19:06:59.231340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.914 [2024-07-24 19:06:59.231365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.914 qpair failed and we were unable to recover it. 00:24:21.915 [2024-07-24 19:06:59.231510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.231535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 00:24:21.915 [2024-07-24 19:06:59.231687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.231713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 00:24:21.915 [2024-07-24 19:06:59.231839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.231864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 
00:24:21.915 [2024-07-24 19:06:59.232036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.232061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 00:24:21.915 [2024-07-24 19:06:59.232204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.232231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 00:24:21.915 [2024-07-24 19:06:59.232358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.232384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 00:24:21.915 [2024-07-24 19:06:59.232512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.232538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 00:24:21.915 [2024-07-24 19:06:59.232672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.232698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 00:24:21.915 [2024-07-24 19:06:59.232858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.232884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 00:24:21.915 [2024-07-24 19:06:59.233039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.233064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 00:24:21.915 [2024-07-24 19:06:59.233218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.233245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 00:24:21.915 [2024-07-24 19:06:59.233379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.233403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 00:24:21.915 [2024-07-24 19:06:59.233534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.233559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 
00:24:21.915 [2024-07-24 19:06:59.233711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.233737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 00:24:21.915 [2024-07-24 19:06:59.233885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.233911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 00:24:21.915 [2024-07-24 19:06:59.234060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.234084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 00:24:21.915 [2024-07-24 19:06:59.234233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.234258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 00:24:21.915 [2024-07-24 19:06:59.234385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.234410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 00:24:21.915 [2024-07-24 19:06:59.234563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.234588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 00:24:21.915 [2024-07-24 19:06:59.234741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.234767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 00:24:21.915 [2024-07-24 19:06:59.234914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.234940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 00:24:21.915 [2024-07-24 19:06:59.235116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.235146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 00:24:21.915 [2024-07-24 19:06:59.235275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.235301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 
00:24:21.915 [2024-07-24 19:06:59.235422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.235448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 00:24:21.915 [2024-07-24 19:06:59.235575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.235601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 00:24:21.915 [2024-07-24 19:06:59.235755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.235780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 00:24:21.915 [2024-07-24 19:06:59.235935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.235961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 00:24:21.915 [2024-07-24 19:06:59.236112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.236137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 00:24:21.915 [2024-07-24 19:06:59.236267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.236292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 00:24:21.915 [2024-07-24 19:06:59.236444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.236469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 00:24:21.915 [2024-07-24 19:06:59.236588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.236613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 00:24:21.915 [2024-07-24 19:06:59.236742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.236768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 00:24:21.915 [2024-07-24 19:06:59.236914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.236939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 
00:24:21.915 [2024-07-24 19:06:59.237055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.237079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 00:24:21.915 [2024-07-24 19:06:59.237221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.237247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 00:24:21.915 [2024-07-24 19:06:59.237388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.237412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 00:24:21.915 [2024-07-24 19:06:59.237567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.237593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.915 qpair failed and we were unable to recover it. 00:24:21.915 [2024-07-24 19:06:59.237743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.915 [2024-07-24 19:06:59.237767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 00:24:21.916 [2024-07-24 19:06:59.237946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.237970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 00:24:21.916 [2024-07-24 19:06:59.238106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.238132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 00:24:21.916 [2024-07-24 19:06:59.238291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.238317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 00:24:21.916 [2024-07-24 19:06:59.238467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.238493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 00:24:21.916 [2024-07-24 19:06:59.238645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.238670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 
00:24:21.916 [2024-07-24 19:06:59.238815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.238841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 00:24:21.916 [2024-07-24 19:06:59.238972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.238997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 00:24:21.916 [2024-07-24 19:06:59.239172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.239198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 00:24:21.916 [2024-07-24 19:06:59.239352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.239378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 00:24:21.916 [2024-07-24 19:06:59.239508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.239534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 00:24:21.916 [2024-07-24 19:06:59.239715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.239740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 00:24:21.916 [2024-07-24 19:06:59.239894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.239919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 00:24:21.916 [2024-07-24 19:06:59.240071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.240096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 00:24:21.916 [2024-07-24 19:06:59.240251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.240276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 00:24:21.916 [2024-07-24 19:06:59.240399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.240425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 
00:24:21.916 [2024-07-24 19:06:59.240574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.240598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 00:24:21.916 [2024-07-24 19:06:59.240754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.240779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 00:24:21.916 [2024-07-24 19:06:59.240931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.240956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 00:24:21.916 [2024-07-24 19:06:59.241129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.241155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 00:24:21.916 [2024-07-24 19:06:59.241285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.241310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 00:24:21.916 [2024-07-24 19:06:59.241436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.241462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 00:24:21.916 [2024-07-24 19:06:59.241639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.241664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 00:24:21.916 [2024-07-24 19:06:59.241815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.241841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 00:24:21.916 [2024-07-24 19:06:59.242019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.242049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 00:24:21.916 [2024-07-24 19:06:59.242176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.242203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 
00:24:21.916 [2024-07-24 19:06:59.242352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.242378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 00:24:21.916 [2024-07-24 19:06:59.242530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.242555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 00:24:21.916 [2024-07-24 19:06:59.242715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.242740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 00:24:21.916 [2024-07-24 19:06:59.242893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.242917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 00:24:21.916 [2024-07-24 19:06:59.243059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.243083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 00:24:21.916 [2024-07-24 19:06:59.243243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.243268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 00:24:21.916 [2024-07-24 19:06:59.243443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.243469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 00:24:21.916 [2024-07-24 19:06:59.243611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.243636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 00:24:21.916 [2024-07-24 19:06:59.243789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.243814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 00:24:21.916 [2024-07-24 19:06:59.243987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.244013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 
00:24:21.916 [2024-07-24 19:06:59.244137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.244163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 00:24:21.916 [2024-07-24 19:06:59.244318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.916 [2024-07-24 19:06:59.244344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.916 qpair failed and we were unable to recover it. 00:24:21.916 [2024-07-24 19:06:59.244525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.917 [2024-07-24 19:06:59.244550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.917 qpair failed and we were unable to recover it. 00:24:21.917 [2024-07-24 19:06:59.244684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.917 [2024-07-24 19:06:59.244709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.917 qpair failed and we were unable to recover it. 00:24:21.917 [2024-07-24 19:06:59.244836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.917 [2024-07-24 19:06:59.244861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.917 qpair failed and we were unable to recover it. 00:24:21.917 [2024-07-24 19:06:59.244986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.917 [2024-07-24 19:06:59.245012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.917 qpair failed and we were unable to recover it. 00:24:21.917 [2024-07-24 19:06:59.245150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.917 [2024-07-24 19:06:59.245176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.917 qpair failed and we were unable to recover it. 00:24:21.917 [2024-07-24 19:06:59.245343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.917 [2024-07-24 19:06:59.245368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.917 qpair failed and we were unable to recover it. 00:24:21.917 [2024-07-24 19:06:59.245517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.917 [2024-07-24 19:06:59.245543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.917 qpair failed and we were unable to recover it. 00:24:21.917 [2024-07-24 19:06:59.245716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.917 [2024-07-24 19:06:59.245742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.917 qpair failed and we were unable to recover it. 
00:24:21.917 [2024-07-24 19:06:59.245893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.917 [2024-07-24 19:06:59.245918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.917 qpair failed and we were unable to recover it. 00:24:21.917 [2024-07-24 19:06:59.246044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.917 [2024-07-24 19:06:59.246070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.917 qpair failed and we were unable to recover it. 00:24:21.917 [2024-07-24 19:06:59.246205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.917 [2024-07-24 19:06:59.246231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.917 qpair failed and we were unable to recover it. 00:24:21.917 [2024-07-24 19:06:59.246358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.917 [2024-07-24 19:06:59.246383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.917 qpair failed and we were unable to recover it. 00:24:21.917 [2024-07-24 19:06:59.246537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.917 [2024-07-24 19:06:59.246563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.917 qpair failed and we were unable to recover it. 00:24:21.917 [2024-07-24 19:06:59.246698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.917 [2024-07-24 19:06:59.246723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.917 qpair failed and we were unable to recover it. 00:24:21.917 [2024-07-24 19:06:59.246876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.917 [2024-07-24 19:06:59.246901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.917 qpair failed and we were unable to recover it. 00:24:21.917 [2024-07-24 19:06:59.247033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.917 [2024-07-24 19:06:59.247058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.917 qpair failed and we were unable to recover it. 00:24:21.917 [2024-07-24 19:06:59.247198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.917 [2024-07-24 19:06:59.247223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.917 qpair failed and we were unable to recover it. 00:24:21.917 [2024-07-24 19:06:59.247367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.917 [2024-07-24 19:06:59.247392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.917 qpair failed and we were unable to recover it. 
00:24:21.917 [2024-07-24 19:06:59.247523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.917 [2024-07-24 19:06:59.247549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.917 qpair failed and we were unable to recover it. 00:24:21.917 [2024-07-24 19:06:59.247696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.917 [2024-07-24 19:06:59.247721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.917 qpair failed and we were unable to recover it. 00:24:21.917 [2024-07-24 19:06:59.247897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.917 [2024-07-24 19:06:59.247923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.917 qpair failed and we were unable to recover it. 00:24:21.917 [2024-07-24 19:06:59.248076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.917 [2024-07-24 19:06:59.248108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.917 qpair failed and we were unable to recover it. 00:24:21.917 [2024-07-24 19:06:59.248243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.917 [2024-07-24 19:06:59.248268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.917 qpair failed and we were unable to recover it. 00:24:21.917 [2024-07-24 19:06:59.248397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.917 [2024-07-24 19:06:59.248422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.917 qpair failed and we were unable to recover it. 00:24:21.917 [2024-07-24 19:06:59.248597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.917 [2024-07-24 19:06:59.248623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.917 qpair failed and we were unable to recover it. 00:24:21.917 [2024-07-24 19:06:59.248752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.917 [2024-07-24 19:06:59.248777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.917 qpair failed and we were unable to recover it. 00:24:21.917 [2024-07-24 19:06:59.248895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.917 [2024-07-24 19:06:59.248924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.917 qpair failed and we were unable to recover it. 00:24:21.917 [2024-07-24 19:06:59.249115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.917 [2024-07-24 19:06:59.249141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.917 qpair failed and we were unable to recover it. 
00:24:21.917 [2024-07-24 19:06:59.249272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.917 [2024-07-24 19:06:59.249297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.917 qpair failed and we were unable to recover it. 00:24:21.917 [2024-07-24 19:06:59.249429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.917 [2024-07-24 19:06:59.249455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.917 qpair failed and we were unable to recover it. 00:24:21.917 [2024-07-24 19:06:59.249585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.917 [2024-07-24 19:06:59.249611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.917 qpair failed and we were unable to recover it. 00:24:21.917 [2024-07-24 19:06:59.249762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.917 [2024-07-24 19:06:59.249787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.917 qpair failed and we were unable to recover it. 00:24:21.917 [2024-07-24 19:06:59.249935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.917 [2024-07-24 19:06:59.249960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.917 qpair failed and we were unable to recover it. 00:24:21.917 [2024-07-24 19:06:59.250085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.917 [2024-07-24 19:06:59.250117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.917 qpair failed and we were unable to recover it. 00:24:21.917 [2024-07-24 19:06:59.250277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.250302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 00:24:21.918 [2024-07-24 19:06:59.250458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.250484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 00:24:21.918 [2024-07-24 19:06:59.250665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.250691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 00:24:21.918 [2024-07-24 19:06:59.250818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.250843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 
00:24:21.918 [2024-07-24 19:06:59.250991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.251016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 00:24:21.918 [2024-07-24 19:06:59.251142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.251169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 00:24:21.918 [2024-07-24 19:06:59.251301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.251326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 00:24:21.918 [2024-07-24 19:06:59.251456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.251482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 00:24:21.918 [2024-07-24 19:06:59.251614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.251640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 00:24:21.918 [2024-07-24 19:06:59.251815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.251840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 00:24:21.918 [2024-07-24 19:06:59.252019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.252044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 00:24:21.918 [2024-07-24 19:06:59.252198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.252224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 00:24:21.918 [2024-07-24 19:06:59.252350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.252377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 00:24:21.918 [2024-07-24 19:06:59.252557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.252583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 
00:24:21.918 [2024-07-24 19:06:59.252734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.252759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 00:24:21.918 [2024-07-24 19:06:59.252918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.252944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 00:24:21.918 [2024-07-24 19:06:59.253072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.253097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 00:24:21.918 [2024-07-24 19:06:59.253236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.253261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 00:24:21.918 [2024-07-24 19:06:59.253434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.253458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 00:24:21.918 [2024-07-24 19:06:59.253645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.253686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 00:24:21.918 [2024-07-24 19:06:59.253818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.253845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 00:24:21.918 [2024-07-24 19:06:59.254002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.254028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 00:24:21.918 [2024-07-24 19:06:59.254186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.254213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 00:24:21.918 [2024-07-24 19:06:59.254391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.254417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 
00:24:21.918 [2024-07-24 19:06:59.254567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.254592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 00:24:21.918 [2024-07-24 19:06:59.254718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.254743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 00:24:21.918 [2024-07-24 19:06:59.254891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.254916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 00:24:21.918 [2024-07-24 19:06:59.255064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.255089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 00:24:21.918 [2024-07-24 19:06:59.255230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.255255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 00:24:21.918 [2024-07-24 19:06:59.255378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.255404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 00:24:21.918 [2024-07-24 19:06:59.255533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.255558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 00:24:21.918 [2024-07-24 19:06:59.255713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.255739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 00:24:21.918 [2024-07-24 19:06:59.255868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.255894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 00:24:21.918 [2024-07-24 19:06:59.256025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.256050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 
00:24:21.918 [2024-07-24 19:06:59.256213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.256244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 00:24:21.918 [2024-07-24 19:06:59.256394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.918 [2024-07-24 19:06:59.256420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.918 qpair failed and we were unable to recover it. 00:24:21.918 [2024-07-24 19:06:59.256567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.256592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 00:24:21.919 [2024-07-24 19:06:59.256723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.256748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 00:24:21.919 [2024-07-24 19:06:59.256899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.256926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 00:24:21.919 [2024-07-24 19:06:59.257100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.257131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 00:24:21.919 [2024-07-24 19:06:59.257264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.257289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 00:24:21.919 [2024-07-24 19:06:59.257438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.257464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 00:24:21.919 [2024-07-24 19:06:59.257587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.257612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 00:24:21.919 [2024-07-24 19:06:59.257744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.257784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 
00:24:21.919 [2024-07-24 19:06:59.257948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.257976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 00:24:21.919 [2024-07-24 19:06:59.258134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.258160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 00:24:21.919 [2024-07-24 19:06:59.258287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.258315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 00:24:21.919 [2024-07-24 19:06:59.258451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.258476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 00:24:21.919 [2024-07-24 19:06:59.258610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.258635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 00:24:21.919 [2024-07-24 19:06:59.258785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.258811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 00:24:21.919 [2024-07-24 19:06:59.258971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.258996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 00:24:21.919 [2024-07-24 19:06:59.259153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.259179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 00:24:21.919 [2024-07-24 19:06:59.259327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.259353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 00:24:21.919 [2024-07-24 19:06:59.259478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.259503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 
00:24:21.919 [2024-07-24 19:06:59.259655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.259681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 00:24:21.919 [2024-07-24 19:06:59.259815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.259841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 00:24:21.919 [2024-07-24 19:06:59.259994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.260020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 00:24:21.919 [2024-07-24 19:06:59.260154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.260179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 00:24:21.919 [2024-07-24 19:06:59.260334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.260359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 00:24:21.919 [2024-07-24 19:06:59.260492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.260517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 00:24:21.919 [2024-07-24 19:06:59.260672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.260697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 00:24:21.919 [2024-07-24 19:06:59.260827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.260852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 00:24:21.919 [2024-07-24 19:06:59.260973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.260999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 00:24:21.919 [2024-07-24 19:06:59.261119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.261145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 
00:24:21.919 [2024-07-24 19:06:59.261319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.261344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 00:24:21.919 [2024-07-24 19:06:59.261467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.261492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 00:24:21.919 [2024-07-24 19:06:59.261624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.261650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 00:24:21.919 [2024-07-24 19:06:59.261796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.261822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 00:24:21.919 [2024-07-24 19:06:59.261976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.262001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 00:24:21.919 [2024-07-24 19:06:59.262134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.262160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 00:24:21.919 [2024-07-24 19:06:59.262342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.262367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 00:24:21.919 [2024-07-24 19:06:59.262493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.262519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 00:24:21.919 [2024-07-24 19:06:59.262673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.262698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 00:24:21.919 [2024-07-24 19:06:59.262818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.262848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.919 qpair failed and we were unable to recover it. 
00:24:21.919 [2024-07-24 19:06:59.262969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.919 [2024-07-24 19:06:59.262995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.920 [2024-07-24 19:06:59.263154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.263180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.920 [2024-07-24 19:06:59.263342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.263372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.920 [2024-07-24 19:06:59.263548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.263573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.920 [2024-07-24 19:06:59.263706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.263731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.920 [2024-07-24 19:06:59.263885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.263911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.920 [2024-07-24 19:06:59.264047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.264073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.920 [2024-07-24 19:06:59.264210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.264235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.920 [2024-07-24 19:06:59.264366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.264391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.920 [2024-07-24 19:06:59.264546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.264572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 
00:24:21.920 [2024-07-24 19:06:59.264757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.264782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.920 [2024-07-24 19:06:59.264938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.264963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.920 [2024-07-24 19:06:59.265134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.265160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.920 [2024-07-24 19:06:59.265305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.265330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.920 [2024-07-24 19:06:59.265457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.265483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.920 [2024-07-24 19:06:59.265634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.265659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.920 [2024-07-24 19:06:59.265794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.265819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.920 [2024-07-24 19:06:59.265948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.265973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.920 [2024-07-24 19:06:59.266131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.266157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.920 [2024-07-24 19:06:59.266307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.266332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 
00:24:21.920 [2024-07-24 19:06:59.266483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.266508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.920 [2024-07-24 19:06:59.266634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.266660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.920 [2024-07-24 19:06:59.266810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.266835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.920 [2024-07-24 19:06:59.266975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.267015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.920 [2024-07-24 19:06:59.267173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.267201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.920 [2024-07-24 19:06:59.267339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.267364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.920 [2024-07-24 19:06:59.267503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.267534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.920 [2024-07-24 19:06:59.267663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.267690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.920 [2024-07-24 19:06:59.267829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.267855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.920 [2024-07-24 19:06:59.268037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.268063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 
00:24:21.920 [2024-07-24 19:06:59.268198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.268225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.920 [2024-07-24 19:06:59.268345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.268370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.920 [2024-07-24 19:06:59.268501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.268526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.920 [2024-07-24 19:06:59.268650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.268676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.920 [2024-07-24 19:06:59.268813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.268838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.920 [2024-07-24 19:06:59.268996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.269021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.920 [2024-07-24 19:06:59.269178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.269203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.920 [2024-07-24 19:06:59.269358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.920 [2024-07-24 19:06:59.269382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.920 qpair failed and we were unable to recover it. 00:24:21.921 [2024-07-24 19:06:59.269502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.269527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 00:24:21.921 [2024-07-24 19:06:59.269681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.269706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 
00:24:21.921 [2024-07-24 19:06:59.269840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.269865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 00:24:21.921 [2024-07-24 19:06:59.269987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.270013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 00:24:21.921 [2024-07-24 19:06:59.270166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.270192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 00:24:21.921 [2024-07-24 19:06:59.270324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.270350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 00:24:21.921 [2024-07-24 19:06:59.270503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.270529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 00:24:21.921 [2024-07-24 19:06:59.270651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.270676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 00:24:21.921 [2024-07-24 19:06:59.270851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.270876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 00:24:21.921 [2024-07-24 19:06:59.271011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.271036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 00:24:21.921 [2024-07-24 19:06:59.271160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.271186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 00:24:21.921 [2024-07-24 19:06:59.271338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.271364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 
00:24:21.921 [2024-07-24 19:06:59.271488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.271514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 00:24:21.921 [2024-07-24 19:06:59.271672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.271697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 00:24:21.921 [2024-07-24 19:06:59.271826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.271853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 00:24:21.921 [2024-07-24 19:06:59.271982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.272007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 00:24:21.921 [2024-07-24 19:06:59.272163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.272189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 00:24:21.921 [2024-07-24 19:06:59.272345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.272371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 00:24:21.921 [2024-07-24 19:06:59.272491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.272516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 00:24:21.921 [2024-07-24 19:06:59.272675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.272700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 00:24:21.921 [2024-07-24 19:06:59.272848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.272873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 00:24:21.921 [2024-07-24 19:06:59.273053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.273078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 
00:24:21.921 [2024-07-24 19:06:59.273264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.273291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 00:24:21.921 [2024-07-24 19:06:59.273415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.273441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 00:24:21.921 [2024-07-24 19:06:59.273570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.273596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 00:24:21.921 [2024-07-24 19:06:59.273747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.273772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 00:24:21.921 [2024-07-24 19:06:59.273944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.273969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 00:24:21.921 [2024-07-24 19:06:59.274130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.274155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 00:24:21.921 [2024-07-24 19:06:59.274286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.274316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 00:24:21.921 [2024-07-24 19:06:59.274445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.274472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 00:24:21.921 [2024-07-24 19:06:59.274593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.274620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 00:24:21.921 [2024-07-24 19:06:59.274806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.274831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 
00:24:21.921 [2024-07-24 19:06:59.275007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.275032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 00:24:21.921 [2024-07-24 19:06:59.275208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.275233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 00:24:21.921 [2024-07-24 19:06:59.275361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.275386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 00:24:21.921 [2024-07-24 19:06:59.275559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.275583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 00:24:21.921 [2024-07-24 19:06:59.275712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.275736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.921 qpair failed and we were unable to recover it. 00:24:21.921 [2024-07-24 19:06:59.275872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.921 [2024-07-24 19:06:59.275899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.922 qpair failed and we were unable to recover it. 00:24:21.922 [2024-07-24 19:06:59.276047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.922 [2024-07-24 19:06:59.276073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.922 qpair failed and we were unable to recover it. 00:24:21.922 [2024-07-24 19:06:59.276218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.922 [2024-07-24 19:06:59.276257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.922 qpair failed and we were unable to recover it. 00:24:21.922 [2024-07-24 19:06:59.276418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.922 [2024-07-24 19:06:59.276444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.922 qpair failed and we were unable to recover it. 00:24:21.922 [2024-07-24 19:06:59.276600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.922 [2024-07-24 19:06:59.276626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.922 qpair failed and we were unable to recover it. 
00:24:21.922 [2024-07-24 19:06:59.276810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.922 [2024-07-24 19:06:59.276836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.922 qpair failed and we were unable to recover it. 00:24:21.922 [2024-07-24 19:06:59.276961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.922 [2024-07-24 19:06:59.276989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.922 qpair failed and we were unable to recover it. 00:24:21.922 [2024-07-24 19:06:59.277148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.922 [2024-07-24 19:06:59.277174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.922 qpair failed and we were unable to recover it. 00:24:21.922 [2024-07-24 19:06:59.277302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.922 [2024-07-24 19:06:59.277328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.922 qpair failed and we were unable to recover it. 00:24:21.922 [2024-07-24 19:06:59.277504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.922 [2024-07-24 19:06:59.277529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.922 qpair failed and we were unable to recover it. 00:24:21.922 [2024-07-24 19:06:59.277705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.922 [2024-07-24 19:06:59.277730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.922 qpair failed and we were unable to recover it. 00:24:21.922 [2024-07-24 19:06:59.277867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.922 [2024-07-24 19:06:59.277895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.922 qpair failed and we were unable to recover it. 00:24:21.922 [2024-07-24 19:06:59.278065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.922 [2024-07-24 19:06:59.278091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.922 qpair failed and we were unable to recover it. 00:24:21.922 [2024-07-24 19:06:59.278255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.922 [2024-07-24 19:06:59.278281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.922 qpair failed and we were unable to recover it. 00:24:21.922 [2024-07-24 19:06:59.278431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.922 [2024-07-24 19:06:59.278456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.922 qpair failed and we were unable to recover it. 
00:24:21.922 [2024-07-24 19:06:59.278578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.922 [2024-07-24 19:06:59.278603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.922 qpair failed and we were unable to recover it. 00:24:21.922 [2024-07-24 19:06:59.278755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.922 [2024-07-24 19:06:59.278781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.922 qpair failed and we were unable to recover it. 00:24:21.922 [2024-07-24 19:06:59.278951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.922 [2024-07-24 19:06:59.278976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.922 qpair failed and we were unable to recover it. 00:24:21.922 [2024-07-24 19:06:59.279132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.922 [2024-07-24 19:06:59.279158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.922 qpair failed and we were unable to recover it. 00:24:21.922 [2024-07-24 19:06:59.279305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.922 [2024-07-24 19:06:59.279330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.922 qpair failed and we were unable to recover it. 00:24:21.922 [2024-07-24 19:06:59.279478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.922 [2024-07-24 19:06:59.279504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.922 qpair failed and we were unable to recover it. 00:24:21.922 [2024-07-24 19:06:59.279654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.922 [2024-07-24 19:06:59.279679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.922 qpair failed and we were unable to recover it. 00:24:21.922 [2024-07-24 19:06:59.279816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.922 [2024-07-24 19:06:59.279843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.922 qpair failed and we were unable to recover it. 00:24:21.922 [2024-07-24 19:06:59.280000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.922 [2024-07-24 19:06:59.280027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.922 qpair failed and we were unable to recover it. 00:24:21.922 [2024-07-24 19:06:59.280190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.922 [2024-07-24 19:06:59.280216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.922 qpair failed and we were unable to recover it. 
00:24:21.922 [2024-07-24 19:06:59.280351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.922 [2024-07-24 19:06:59.280379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.922 qpair failed and we were unable to recover it. 00:24:21.922 [2024-07-24 19:06:59.280530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.922 [2024-07-24 19:06:59.280555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.922 qpair failed and we were unable to recover it. 00:24:21.922 [2024-07-24 19:06:59.280711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.922 [2024-07-24 19:06:59.280737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.922 qpair failed and we were unable to recover it. 00:24:21.922 [2024-07-24 19:06:59.280891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.922 [2024-07-24 19:06:59.280917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.922 qpair failed and we were unable to recover it. 00:24:21.922 [2024-07-24 19:06:59.281060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.922 [2024-07-24 19:06:59.281084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.922 qpair failed and we were unable to recover it. 00:24:21.922 [2024-07-24 19:06:59.281230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.922 [2024-07-24 19:06:59.281257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.922 qpair failed and we were unable to recover it. 00:24:21.922 [2024-07-24 19:06:59.281383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.922 [2024-07-24 19:06:59.281409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.922 qpair failed and we were unable to recover it. 00:24:21.922 [2024-07-24 19:06:59.281564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.922 [2024-07-24 19:06:59.281590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.922 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.281725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.281750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.281922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.281947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 
00:24:21.923 [2024-07-24 19:06:59.282098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.282132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.282289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.282314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.282466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.282491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.282644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.282669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.282793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.282818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.282950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.282975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.283098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.283129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.283281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.283307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.283441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.283466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.283639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.283664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 
00:24:21.923 [2024-07-24 19:06:59.283788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.283817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.283942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.283967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.284113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.284139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.284269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.284294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.284475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.284500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.284624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.284649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.284800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.284824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.284976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.285001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.285150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.285176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.285331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.285357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 
00:24:21.923 [2024-07-24 19:06:59.285501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.285526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.285680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.285705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.285852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.285878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.285998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.286023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.286182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.286208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.286357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.286382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.286525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.286550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.286703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.286728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.286847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.286873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.286997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.287022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 
00:24:21.923 [2024-07-24 19:06:59.287178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.287203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.287323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.287348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.287468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.287493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.287618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.287645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.287803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.287829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.287956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.923 [2024-07-24 19:06:59.287982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.923 qpair failed and we were unable to recover it. 00:24:21.923 [2024-07-24 19:06:59.288129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.288155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 00:24:21.924 [2024-07-24 19:06:59.288287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.288318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 00:24:21.924 [2024-07-24 19:06:59.288481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.288506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 00:24:21.924 [2024-07-24 19:06:59.288630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.288655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 
00:24:21.924 [2024-07-24 19:06:59.288811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.288836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 00:24:21.924 [2024-07-24 19:06:59.288991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.289015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 00:24:21.924 [2024-07-24 19:06:59.289169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.289196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 00:24:21.924 [2024-07-24 19:06:59.289345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.289370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 00:24:21.924 [2024-07-24 19:06:59.289517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.289542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 00:24:21.924 [2024-07-24 19:06:59.289696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.289721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 00:24:21.924 [2024-07-24 19:06:59.289871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.289896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 00:24:21.924 [2024-07-24 19:06:59.290041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.290066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 00:24:21.924 [2024-07-24 19:06:59.290231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.290256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 00:24:21.924 [2024-07-24 19:06:59.290386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.290410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 
00:24:21.924 [2024-07-24 19:06:59.290529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.290555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 00:24:21.924 [2024-07-24 19:06:59.290683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.290709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 00:24:21.924 [2024-07-24 19:06:59.290860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.290885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 00:24:21.924 [2024-07-24 19:06:59.291022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.291047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 00:24:21.924 [2024-07-24 19:06:59.291203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.291229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 00:24:21.924 [2024-07-24 19:06:59.291382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.291407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 00:24:21.924 [2024-07-24 19:06:59.291560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.291585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 00:24:21.924 [2024-07-24 19:06:59.291752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.291777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 00:24:21.924 [2024-07-24 19:06:59.291951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.291977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 00:24:21.924 [2024-07-24 19:06:59.292128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.292154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 
00:24:21.924 [2024-07-24 19:06:59.292292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.292316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 00:24:21.924 [2024-07-24 19:06:59.292439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.292464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 00:24:21.924 [2024-07-24 19:06:59.292591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.292618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 00:24:21.924 [2024-07-24 19:06:59.292751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.292776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 00:24:21.924 [2024-07-24 19:06:59.292931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.292960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 00:24:21.924 [2024-07-24 19:06:59.293142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.293168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 00:24:21.924 [2024-07-24 19:06:59.293318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.293343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 00:24:21.924 [2024-07-24 19:06:59.293495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.293521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 00:24:21.924 [2024-07-24 19:06:59.293646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.293671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 00:24:21.924 [2024-07-24 19:06:59.293799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.293824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 
00:24:21.924 [2024-07-24 19:06:59.293947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.293971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 00:24:21.924 [2024-07-24 19:06:59.294098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.294129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 00:24:21.924 [2024-07-24 19:06:59.294279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.294304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 00:24:21.924 [2024-07-24 19:06:59.294461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.924 [2024-07-24 19:06:59.294485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.924 qpair failed and we were unable to recover it. 00:24:21.924 [2024-07-24 19:06:59.294633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.294659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 00:24:21.925 [2024-07-24 19:06:59.294844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.294870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 00:24:21.925 [2024-07-24 19:06:59.295044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.295069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 00:24:21.925 [2024-07-24 19:06:59.295226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.295252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 00:24:21.925 [2024-07-24 19:06:59.295408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.295433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 00:24:21.925 [2024-07-24 19:06:59.295585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.295611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 
00:24:21.925 [2024-07-24 19:06:59.295734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.295759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 00:24:21.925 [2024-07-24 19:06:59.295884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.295908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 00:24:21.925 [2024-07-24 19:06:59.296032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.296057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 00:24:21.925 [2024-07-24 19:06:59.296224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.296249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 00:24:21.925 [2024-07-24 19:06:59.296399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.296424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 00:24:21.925 [2024-07-24 19:06:59.296572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.296597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 00:24:21.925 [2024-07-24 19:06:59.296760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.296785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 00:24:21.925 [2024-07-24 19:06:59.296908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.296933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 00:24:21.925 [2024-07-24 19:06:59.297085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.297117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 00:24:21.925 [2024-07-24 19:06:59.297266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.297291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 
00:24:21.925 [2024-07-24 19:06:59.297422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.297448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 00:24:21.925 [2024-07-24 19:06:59.297636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.297662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 00:24:21.925 [2024-07-24 19:06:59.297823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.297848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 00:24:21.925 [2024-07-24 19:06:59.297995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.298021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 00:24:21.925 [2024-07-24 19:06:59.298169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.298196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 00:24:21.925 [2024-07-24 19:06:59.298325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.298352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 00:24:21.925 [2024-07-24 19:06:59.298474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.298500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 00:24:21.925 [2024-07-24 19:06:59.298624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.298649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 00:24:21.925 [2024-07-24 19:06:59.298774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.298798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 00:24:21.925 [2024-07-24 19:06:59.298945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.298970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 
00:24:21.925 [2024-07-24 19:06:59.299144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.299170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 00:24:21.925 [2024-07-24 19:06:59.299299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.299324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 00:24:21.925 [2024-07-24 19:06:59.299498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.299523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 00:24:21.925 [2024-07-24 19:06:59.299670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.299695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 00:24:21.925 [2024-07-24 19:06:59.299831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.299856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 00:24:21.925 [2024-07-24 19:06:59.299976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.300004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 00:24:21.925 [2024-07-24 19:06:59.300126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.300151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 00:24:21.925 [2024-07-24 19:06:59.300300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.300327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 00:24:21.925 [2024-07-24 19:06:59.300479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.300504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 00:24:21.925 [2024-07-24 19:06:59.300650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.300675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 
00:24:21.925 [2024-07-24 19:06:59.300800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.300826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 00:24:21.925 [2024-07-24 19:06:59.300983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.301008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.925 qpair failed and we were unable to recover it. 00:24:21.925 [2024-07-24 19:06:59.301140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.925 [2024-07-24 19:06:59.301166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 00:24:21.926 [2024-07-24 19:06:59.301321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.301346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 00:24:21.926 [2024-07-24 19:06:59.301503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.301528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 00:24:21.926 [2024-07-24 19:06:59.301660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.301684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 00:24:21.926 [2024-07-24 19:06:59.301847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.301872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 00:24:21.926 [2024-07-24 19:06:59.302012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.302037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 00:24:21.926 [2024-07-24 19:06:59.302187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.302213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 00:24:21.926 [2024-07-24 19:06:59.302335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.302360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 
00:24:21.926 [2024-07-24 19:06:59.302487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.302511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 00:24:21.926 [2024-07-24 19:06:59.302675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.302701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 00:24:21.926 [2024-07-24 19:06:59.302832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.302857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 00:24:21.926 [2024-07-24 19:06:59.302980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.303005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 00:24:21.926 [2024-07-24 19:06:59.303134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.303159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 00:24:21.926 [2024-07-24 19:06:59.303296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.303321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 00:24:21.926 [2024-07-24 19:06:59.303449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.303474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 00:24:21.926 [2024-07-24 19:06:59.303620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.303646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 00:24:21.926 [2024-07-24 19:06:59.303786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.303812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 00:24:21.926 [2024-07-24 19:06:59.303953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.303977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 
00:24:21.926 [2024-07-24 19:06:59.304120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.304145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 00:24:21.926 [2024-07-24 19:06:59.304300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.304325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 00:24:21.926 [2024-07-24 19:06:59.304461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.304491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 00:24:21.926 [2024-07-24 19:06:59.304646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.304671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 00:24:21.926 [2024-07-24 19:06:59.304798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.304824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 00:24:21.926 [2024-07-24 19:06:59.304975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.305001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 00:24:21.926 [2024-07-24 19:06:59.305158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.305185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 00:24:21.926 [2024-07-24 19:06:59.305320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.305345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 00:24:21.926 [2024-07-24 19:06:59.305517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.305541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 00:24:21.926 [2024-07-24 19:06:59.305665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.305691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 
00:24:21.926 [2024-07-24 19:06:59.305840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.305865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 00:24:21.926 [2024-07-24 19:06:59.306040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.306065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 00:24:21.926 [2024-07-24 19:06:59.306221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.306246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 00:24:21.926 [2024-07-24 19:06:59.306395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.306420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 00:24:21.926 [2024-07-24 19:06:59.306552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.306577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 00:24:21.926 [2024-07-24 19:06:59.306724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.306748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 00:24:21.926 [2024-07-24 19:06:59.306906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.306931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 00:24:21.926 [2024-07-24 19:06:59.307083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.307116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 00:24:21.926 [2024-07-24 19:06:59.307246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.307271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 00:24:21.926 [2024-07-24 19:06:59.307404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.307431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.926 qpair failed and we were unable to recover it. 
00:24:21.926 [2024-07-24 19:06:59.307564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.926 [2024-07-24 19:06:59.307589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.927 qpair failed and we were unable to recover it. 00:24:21.927 [2024-07-24 19:06:59.307706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.927 [2024-07-24 19:06:59.307731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.927 qpair failed and we were unable to recover it. 00:24:21.927 [2024-07-24 19:06:59.307855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.927 [2024-07-24 19:06:59.307882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.927 qpair failed and we were unable to recover it. 00:24:21.927 [2024-07-24 19:06:59.308012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.927 [2024-07-24 19:06:59.308037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.927 qpair failed and we were unable to recover it. 00:24:21.927 [2024-07-24 19:06:59.308162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.927 [2024-07-24 19:06:59.308188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.927 qpair failed and we were unable to recover it. 00:24:21.927 [2024-07-24 19:06:59.308336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.927 [2024-07-24 19:06:59.308361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.927 qpair failed and we were unable to recover it. 00:24:21.927 [2024-07-24 19:06:59.308488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.927 [2024-07-24 19:06:59.308513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.927 qpair failed and we were unable to recover it. 00:24:21.927 [2024-07-24 19:06:59.308639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.927 [2024-07-24 19:06:59.308664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.927 qpair failed and we were unable to recover it. 00:24:21.927 [2024-07-24 19:06:59.308779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.927 [2024-07-24 19:06:59.308804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.927 qpair failed and we were unable to recover it. 00:24:21.927 [2024-07-24 19:06:59.308920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.927 [2024-07-24 19:06:59.308949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.927 qpair failed and we were unable to recover it. 
00:24:21.927 [2024-07-24 19:06:59.309071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.927 [2024-07-24 19:06:59.309096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.927 qpair failed and we were unable to recover it.
00:24:21.927 [2024-07-24 19:06:59.309244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.927 [2024-07-24 19:06:59.309270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.927 qpair failed and we were unable to recover it.
00:24:21.927 [2024-07-24 19:06:59.309403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.927 [2024-07-24 19:06:59.309428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.927 qpair failed and we were unable to recover it.
00:24:21.927 [2024-07-24 19:06:59.309407] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:21.927 [2024-07-24 19:06:59.309440] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:21.927 [2024-07-24 19:06:59.309456] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:21.927 [2024-07-24 19:06:59.309468] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:21.927 [2024-07-24 19:06:59.309478] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:21.927 [2024-07-24 19:06:59.309546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:24:21.927 [2024-07-24 19:06:59.309580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:24:21.927 [2024-07-24 19:06:59.309626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:24:21.927 [2024-07-24 19:06:59.309629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:24:21.927 [2024-07-24 19:06:59.309591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.927 [2024-07-24 19:06:59.309617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.927 qpair failed and we were unable to recover it.
00:24:21.927 [2024-07-24 19:06:59.309762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.927 [2024-07-24 19:06:59.309786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.927 qpair failed and we were unable to recover it.
00:24:21.927 [2024-07-24 19:06:59.309913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.927 [2024-07-24 19:06:59.309938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.927 qpair failed and we were unable to recover it.
00:24:21.927 [2024-07-24 19:06:59.310088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.927 [2024-07-24 19:06:59.310119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.927 qpair failed and we were unable to recover it.
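The app_setup_trace NOTICE lines above explain how to pull the tracepoint data while the target is up. A minimal capture sketch using only the command and shared-memory path the log itself prints (the app name 'nvmf', instance id '0', and the path /dev/shm/nvmf_trace.0 are this run's values and will differ for other runs):

    # Snapshot the live tracepoints of the running nvmf target (command from the NOTICE).
    spdk_trace -s nvmf -i 0
    # Plain 'spdk_trace' also works when this is the only SPDK app running.
    spdk_trace
    # Or keep the shared-memory trace file for offline analysis/debug.
    cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0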
00:24:21.927 [2024-07-24 19:06:59.310252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.927 [2024-07-24 19:06:59.310278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.927 qpair failed and we were unable to recover it. 00:24:21.927 [2024-07-24 19:06:59.310407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.927 [2024-07-24 19:06:59.310431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.927 qpair failed and we were unable to recover it. 00:24:21.927 [2024-07-24 19:06:59.310560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.927 [2024-07-24 19:06:59.310585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.927 qpair failed and we were unable to recover it. 00:24:21.927 [2024-07-24 19:06:59.310727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.927 [2024-07-24 19:06:59.310752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.927 qpair failed and we were unable to recover it. 00:24:21.927 [2024-07-24 19:06:59.310878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.927 [2024-07-24 19:06:59.310904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.927 qpair failed and we were unable to recover it. 00:24:21.927 [2024-07-24 19:06:59.311025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.927 [2024-07-24 19:06:59.311051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.927 qpair failed and we were unable to recover it. 00:24:21.927 [2024-07-24 19:06:59.311193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.927 [2024-07-24 19:06:59.311218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.927 qpair failed and we were unable to recover it. 00:24:21.927 [2024-07-24 19:06:59.311370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.927 [2024-07-24 19:06:59.311395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.927 qpair failed and we were unable to recover it. 00:24:21.927 [2024-07-24 19:06:59.311516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.927 [2024-07-24 19:06:59.311541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.927 qpair failed and we were unable to recover it. 00:24:21.927 [2024-07-24 19:06:59.311659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.927 [2024-07-24 19:06:59.311683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.927 qpair failed and we were unable to recover it. 
00:24:21.927 [2024-07-24 19:06:59.311829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.927 [2024-07-24 19:06:59.311855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.927 qpair failed and we were unable to recover it. 00:24:21.927 [2024-07-24 19:06:59.312009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.927 [2024-07-24 19:06:59.312034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.927 qpair failed and we were unable to recover it. 00:24:21.927 [2024-07-24 19:06:59.312160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.927 [2024-07-24 19:06:59.312186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.927 qpair failed and we were unable to recover it. 00:24:21.927 [2024-07-24 19:06:59.312307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.312332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 00:24:21.928 [2024-07-24 19:06:59.312560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.312585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 00:24:21.928 [2024-07-24 19:06:59.312709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.312734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 00:24:21.928 [2024-07-24 19:06:59.312880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.312910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 00:24:21.928 [2024-07-24 19:06:59.313029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.313055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 00:24:21.928 [2024-07-24 19:06:59.313180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.313205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 00:24:21.928 [2024-07-24 19:06:59.313325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.313349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 
00:24:21.928 [2024-07-24 19:06:59.313472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.313498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 00:24:21.928 [2024-07-24 19:06:59.313632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.313657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 00:24:21.928 [2024-07-24 19:06:59.313810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.313835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 00:24:21.928 [2024-07-24 19:06:59.313964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.313988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 00:24:21.928 [2024-07-24 19:06:59.314140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.314166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 00:24:21.928 [2024-07-24 19:06:59.314285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.314310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 00:24:21.928 [2024-07-24 19:06:59.314429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.314455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 00:24:21.928 [2024-07-24 19:06:59.314604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.314629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 00:24:21.928 [2024-07-24 19:06:59.314744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.314768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 00:24:21.928 [2024-07-24 19:06:59.314906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.314931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 
00:24:21.928 [2024-07-24 19:06:59.315060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.315086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 00:24:21.928 [2024-07-24 19:06:59.315249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.315275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 00:24:21.928 [2024-07-24 19:06:59.315428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.315453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 00:24:21.928 [2024-07-24 19:06:59.315652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.315677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 00:24:21.928 [2024-07-24 19:06:59.315799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.315824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 00:24:21.928 [2024-07-24 19:06:59.315942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.315969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 00:24:21.928 [2024-07-24 19:06:59.316113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.316139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 00:24:21.928 [2024-07-24 19:06:59.316293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.316318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 00:24:21.928 [2024-07-24 19:06:59.316437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.316462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 00:24:21.928 [2024-07-24 19:06:59.316590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.316615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 
00:24:21.928 [2024-07-24 19:06:59.316807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.316832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 00:24:21.928 [2024-07-24 19:06:59.316953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.316978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 00:24:21.928 [2024-07-24 19:06:59.317129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.317154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 00:24:21.928 [2024-07-24 19:06:59.317281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.317310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 00:24:21.928 [2024-07-24 19:06:59.317475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.317500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 00:24:21.928 [2024-07-24 19:06:59.317649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.317674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 00:24:21.928 [2024-07-24 19:06:59.317802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.317827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 00:24:21.928 [2024-07-24 19:06:59.317950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.317976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 00:24:21.928 [2024-07-24 19:06:59.318106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.318132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 00:24:21.928 [2024-07-24 19:06:59.318287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.318313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 
00:24:21.928 [2024-07-24 19:06:59.318430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.928 [2024-07-24 19:06:59.318456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.928 qpair failed and we were unable to recover it. 00:24:21.928 [2024-07-24 19:06:59.318655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.318681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.318802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.318827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.318954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.318979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.319111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.319136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.319255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.319280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.319412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.319437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.319599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.319624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.319749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.319774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.319898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.319923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 
00:24:21.929 [2024-07-24 19:06:59.320048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.320073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.320226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.320252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.320395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.320422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.320566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.320591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.320710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.320735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.320863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.320887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.321039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.321064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.321216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.321241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.321389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.321414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.321526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.321551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 
00:24:21.929 [2024-07-24 19:06:59.321679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.321704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.321826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.321851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.322003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.322029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.322189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.322215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.322353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.322377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.322577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.322602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.322744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.322770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.322924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.322967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.323099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.323135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.323274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.323301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 
00:24:21.929 [2024-07-24 19:06:59.323450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.323476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.323638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.323663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.323866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.323892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.324052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.324078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.324225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.324250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.324399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.324424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.324556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.324582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.324735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.324760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.324909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.324934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.929 qpair failed and we were unable to recover it. 00:24:21.929 [2024-07-24 19:06:59.325090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.929 [2024-07-24 19:06:59.325119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 
00:24:21.930 [2024-07-24 19:06:59.325270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.325295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 00:24:21.930 [2024-07-24 19:06:59.325449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.325474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 00:24:21.930 [2024-07-24 19:06:59.325625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.325650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 00:24:21.930 [2024-07-24 19:06:59.325793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.325818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 00:24:21.930 [2024-07-24 19:06:59.325956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.325980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 00:24:21.930 [2024-07-24 19:06:59.326135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.326161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 00:24:21.930 [2024-07-24 19:06:59.326291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.326317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 00:24:21.930 [2024-07-24 19:06:59.326441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.326466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 00:24:21.930 [2024-07-24 19:06:59.326620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.326645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 00:24:21.930 [2024-07-24 19:06:59.326801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.326826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 
00:24:21.930 [2024-07-24 19:06:59.326978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.327003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 00:24:21.930 [2024-07-24 19:06:59.327155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.327195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 00:24:21.930 [2024-07-24 19:06:59.327355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.327382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 00:24:21.930 [2024-07-24 19:06:59.327510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.327536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 00:24:21.930 [2024-07-24 19:06:59.327653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.327678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 00:24:21.930 [2024-07-24 19:06:59.327813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.327838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 00:24:21.930 [2024-07-24 19:06:59.327957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.327982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 00:24:21.930 [2024-07-24 19:06:59.328138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.328165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 00:24:21.930 [2024-07-24 19:06:59.328315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.328340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 00:24:21.930 [2024-07-24 19:06:59.328473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.328497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 
00:24:21.930 [2024-07-24 19:06:59.328623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.328649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 00:24:21.930 [2024-07-24 19:06:59.328812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.328838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 00:24:21.930 [2024-07-24 19:06:59.329040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.329065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 00:24:21.930 [2024-07-24 19:06:59.329195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.329222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 00:24:21.930 [2024-07-24 19:06:59.329354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.329379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 00:24:21.930 [2024-07-24 19:06:59.329531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.329555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 00:24:21.930 [2024-07-24 19:06:59.329678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.329705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 00:24:21.930 [2024-07-24 19:06:59.329910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.329936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 00:24:21.930 [2024-07-24 19:06:59.330063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.330088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 00:24:21.930 [2024-07-24 19:06:59.330220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.330246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 
00:24:21.930 [2024-07-24 19:06:59.330368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.330393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 00:24:21.930 [2024-07-24 19:06:59.330545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.330570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 00:24:21.930 [2024-07-24 19:06:59.330721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.330747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 00:24:21.930 [2024-07-24 19:06:59.330869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.330893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 00:24:21.930 [2024-07-24 19:06:59.331028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.331057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 00:24:21.930 [2024-07-24 19:06:59.331212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.331238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 00:24:21.930 [2024-07-24 19:06:59.331393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.930 [2024-07-24 19:06:59.331418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.930 qpair failed and we were unable to recover it. 00:24:21.930 [2024-07-24 19:06:59.331575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.931 [2024-07-24 19:06:59.331600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.931 qpair failed and we were unable to recover it. 00:24:21.931 [2024-07-24 19:06:59.331752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.931 [2024-07-24 19:06:59.331778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.931 qpair failed and we were unable to recover it. 00:24:21.931 [2024-07-24 19:06:59.331906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.931 [2024-07-24 19:06:59.331931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.931 qpair failed and we were unable to recover it. 
00:24:21.931 [2024-07-24 19:06:59.332080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.931 [2024-07-24 19:06:59.332112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.931 qpair failed and we were unable to recover it. 00:24:21.931 [2024-07-24 19:06:59.332241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.931 [2024-07-24 19:06:59.332268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.931 qpair failed and we were unable to recover it. 00:24:21.931 [2024-07-24 19:06:59.332388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.931 [2024-07-24 19:06:59.332413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.931 qpair failed and we were unable to recover it. 00:24:21.931 [2024-07-24 19:06:59.332560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.931 [2024-07-24 19:06:59.332587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.931 qpair failed and we were unable to recover it. 00:24:21.931 [2024-07-24 19:06:59.332742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.931 [2024-07-24 19:06:59.332769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.931 qpair failed and we were unable to recover it. 00:24:21.931 [2024-07-24 19:06:59.332892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.931 [2024-07-24 19:06:59.332917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.931 qpair failed and we were unable to recover it. 00:24:21.931 [2024-07-24 19:06:59.333047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.931 [2024-07-24 19:06:59.333073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.931 qpair failed and we were unable to recover it. 00:24:21.931 [2024-07-24 19:06:59.333232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.931 [2024-07-24 19:06:59.333258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.931 qpair failed and we were unable to recover it. 00:24:21.931 [2024-07-24 19:06:59.333383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.931 [2024-07-24 19:06:59.333408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.931 qpair failed and we were unable to recover it. 00:24:21.931 [2024-07-24 19:06:59.333538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.931 [2024-07-24 19:06:59.333565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.931 qpair failed and we were unable to recover it. 
00:24:21.931 [2024-07-24 19:06:59.333698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.931 [2024-07-24 19:06:59.333723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420
00:24:21.931 qpair failed and we were unable to recover it.
00:24:21.931 [previous three messages (posix.c:1023:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeated for every retry between 19:06:59.333854 and 19:06:59.368498]
00:24:21.937 [2024-07-24 19:06:59.368632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.937 [2024-07-24 19:06:59.368657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420
00:24:21.937 qpair failed and we were unable to recover it.
00:24:21.937 [2024-07-24 19:06:59.368772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.937 [2024-07-24 19:06:59.368797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.937 qpair failed and we were unable to recover it. 00:24:21.937 [2024-07-24 19:06:59.368987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.937 [2024-07-24 19:06:59.369038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.937 qpair failed and we were unable to recover it. 00:24:21.937 [2024-07-24 19:06:59.369190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.937 [2024-07-24 19:06:59.369219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.937 qpair failed and we were unable to recover it. 00:24:21.937 [2024-07-24 19:06:59.369384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.937 [2024-07-24 19:06:59.369411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.937 qpair failed and we were unable to recover it. 00:24:21.937 [2024-07-24 19:06:59.369647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.937 [2024-07-24 19:06:59.369673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.937 qpair failed and we were unable to recover it. 00:24:21.937 [2024-07-24 19:06:59.369822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.937 [2024-07-24 19:06:59.369848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.937 qpair failed and we were unable to recover it. 00:24:21.937 [2024-07-24 19:06:59.369989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.937 [2024-07-24 19:06:59.370017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.937 qpair failed and we were unable to recover it. 00:24:21.937 [2024-07-24 19:06:59.370150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.937 [2024-07-24 19:06:59.370177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.937 qpair failed and we were unable to recover it. 00:24:21.937 [2024-07-24 19:06:59.370326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.937 [2024-07-24 19:06:59.370352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.937 qpair failed and we were unable to recover it. 00:24:21.937 [2024-07-24 19:06:59.370484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.937 [2024-07-24 19:06:59.370510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.937 qpair failed and we were unable to recover it. 
00:24:21.937 [2024-07-24 19:06:59.370639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.937 [2024-07-24 19:06:59.370667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.937 qpair failed and we were unable to recover it. 00:24:21.937 [2024-07-24 19:06:59.370785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.937 [2024-07-24 19:06:59.370812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.937 qpair failed and we were unable to recover it. 00:24:21.937 [2024-07-24 19:06:59.370959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.937 [2024-07-24 19:06:59.370985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.937 qpair failed and we were unable to recover it. 00:24:21.937 [2024-07-24 19:06:59.371141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.937 [2024-07-24 19:06:59.371169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.937 qpair failed and we were unable to recover it. 00:24:21.937 [2024-07-24 19:06:59.371290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.937 [2024-07-24 19:06:59.371324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.937 qpair failed and we were unable to recover it. 00:24:21.937 [2024-07-24 19:06:59.371451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.937 [2024-07-24 19:06:59.371478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.937 qpair failed and we were unable to recover it. 00:24:21.937 [2024-07-24 19:06:59.371627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.937 [2024-07-24 19:06:59.371653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.937 qpair failed and we were unable to recover it. 00:24:21.937 [2024-07-24 19:06:59.371793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.937 [2024-07-24 19:06:59.371818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.937 qpair failed and we were unable to recover it. 00:24:21.937 [2024-07-24 19:06:59.371943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.937 [2024-07-24 19:06:59.371968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.937 qpair failed and we were unable to recover it. 00:24:21.937 [2024-07-24 19:06:59.372130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.937 [2024-07-24 19:06:59.372159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.937 qpair failed and we were unable to recover it. 
00:24:21.937 [2024-07-24 19:06:59.372313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.937 [2024-07-24 19:06:59.372340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.937 qpair failed and we were unable to recover it. 00:24:21.937 [2024-07-24 19:06:59.372459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.937 [2024-07-24 19:06:59.372485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.937 qpair failed and we were unable to recover it. 00:24:21.937 [2024-07-24 19:06:59.372617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.937 [2024-07-24 19:06:59.372643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.937 qpair failed and we were unable to recover it. 00:24:21.937 [2024-07-24 19:06:59.372848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.937 [2024-07-24 19:06:59.372875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.937 qpair failed and we were unable to recover it. 00:24:21.937 [2024-07-24 19:06:59.373120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.937 [2024-07-24 19:06:59.373156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.937 qpair failed and we were unable to recover it. 00:24:21.937 [2024-07-24 19:06:59.373287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.937 [2024-07-24 19:06:59.373313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.937 qpair failed and we were unable to recover it. 00:24:21.937 [2024-07-24 19:06:59.373429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.937 [2024-07-24 19:06:59.373455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.937 qpair failed and we were unable to recover it. 00:24:21.937 [2024-07-24 19:06:59.373587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.373613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.373777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.373813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.373948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.373974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 
00:24:21.938 [2024-07-24 19:06:59.374110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.374135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.374286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.374311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.374427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.374454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.374571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.374596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.374756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.374782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.374928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.374953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.375081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.375113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.375254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.375280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.375414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.375439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.375567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.375593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 
00:24:21.938 [2024-07-24 19:06:59.375741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.375766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.375910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.375935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.376083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.376114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.376239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.376264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.376391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.376416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.376559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.376584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.376712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.376739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.376878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.376903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.377057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.377082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.377242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.377268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 
00:24:21.938 [2024-07-24 19:06:59.377396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.377429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.377582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.377609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.377730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.377756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.377914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.377939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.378056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.378085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.378245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.378286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.378535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.378562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.378701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.378728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.378897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.378924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.379055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.379081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 
00:24:21.938 [2024-07-24 19:06:59.379235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.379274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.379414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.379441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.379561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.379597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.379719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.379744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.379871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.379897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.380029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.380054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.380185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.938 [2024-07-24 19:06:59.380211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.938 qpair failed and we were unable to recover it. 00:24:21.938 [2024-07-24 19:06:59.380404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.380430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.380555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.380580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.380744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.380770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 
00:24:21.939 [2024-07-24 19:06:59.380923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.380949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.381088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.381122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.381250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.381276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.381394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.381420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.381580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.381605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.381736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.381761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.381910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.381935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.382056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.382081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.382274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.382300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.382423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.382447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 
00:24:21.939 [2024-07-24 19:06:59.382576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.382601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.382745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.382770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.382911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.382936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.383053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.383079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.383202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.383228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.383344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.383369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.383492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.383517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.383660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.383690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.383815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.383840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.384006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.384030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 
00:24:21.939 [2024-07-24 19:06:59.384184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.384210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.384339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.384363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.384493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.384518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.384670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.384695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.384844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.384868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.385056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.385081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.385208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.385233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.385365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.385391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.385516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.385541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.385695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.385729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 
00:24:21.939 [2024-07-24 19:06:59.385872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.385896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.386013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.386038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.386186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.386212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.386347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.386373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.386502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.386527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.386681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.386706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.386828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.386853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.386999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.939 [2024-07-24 19:06:59.387024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.939 qpair failed and we were unable to recover it. 00:24:21.939 [2024-07-24 19:06:59.387149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.940 [2024-07-24 19:06:59.387175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.940 qpair failed and we were unable to recover it. 00:24:21.940 [2024-07-24 19:06:59.387309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.940 [2024-07-24 19:06:59.387334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.940 qpair failed and we were unable to recover it. 
00:24:21.940 [2024-07-24 19:06:59.387484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.940 [2024-07-24 19:06:59.387509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.940 qpair failed and we were unable to recover it. 00:24:21.940 [2024-07-24 19:06:59.387668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.940 [2024-07-24 19:06:59.387693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.940 qpair failed and we were unable to recover it. 00:24:21.940 [2024-07-24 19:06:59.387807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.940 [2024-07-24 19:06:59.387832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.940 qpair failed and we were unable to recover it. 00:24:21.940 [2024-07-24 19:06:59.387982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.940 [2024-07-24 19:06:59.388007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.940 qpair failed and we were unable to recover it. 00:24:21.940 [2024-07-24 19:06:59.388195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.940 [2024-07-24 19:06:59.388222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.940 qpair failed and we were unable to recover it. 00:24:21.940 [2024-07-24 19:06:59.388338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.940 [2024-07-24 19:06:59.388363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.940 qpair failed and we were unable to recover it. 00:24:21.940 [2024-07-24 19:06:59.388482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.940 [2024-07-24 19:06:59.388507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.940 qpair failed and we were unable to recover it. 00:24:21.940 [2024-07-24 19:06:59.388703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.940 [2024-07-24 19:06:59.388728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.940 qpair failed and we were unable to recover it. 00:24:21.940 [2024-07-24 19:06:59.388862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.940 [2024-07-24 19:06:59.388887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.940 qpair failed and we were unable to recover it. 00:24:21.940 [2024-07-24 19:06:59.389017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.940 [2024-07-24 19:06:59.389041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.940 qpair failed and we were unable to recover it. 
00:24:21.940 [2024-07-24 19:06:59.389182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.940 [2024-07-24 19:06:59.389207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.940 qpair failed and we were unable to recover it. 00:24:21.940 [2024-07-24 19:06:59.389363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.940 [2024-07-24 19:06:59.389388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.940 qpair failed and we were unable to recover it. 00:24:21.940 [2024-07-24 19:06:59.389534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.940 [2024-07-24 19:06:59.389564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.940 qpair failed and we were unable to recover it. 00:24:21.940 [2024-07-24 19:06:59.389700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.940 [2024-07-24 19:06:59.389725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.940 qpair failed and we were unable to recover it. 00:24:21.940 [2024-07-24 19:06:59.389872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.940 [2024-07-24 19:06:59.389897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.940 qpair failed and we were unable to recover it. 00:24:21.940 [2024-07-24 19:06:59.390023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.940 [2024-07-24 19:06:59.390050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.940 qpair failed and we were unable to recover it. 00:24:21.940 [2024-07-24 19:06:59.390201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.940 [2024-07-24 19:06:59.390226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.940 qpair failed and we were unable to recover it. 00:24:21.940 [2024-07-24 19:06:59.390369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.940 [2024-07-24 19:06:59.390394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.940 qpair failed and we were unable to recover it. 00:24:21.940 [2024-07-24 19:06:59.390521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.940 [2024-07-24 19:06:59.390545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.940 qpair failed and we were unable to recover it. 00:24:21.940 [2024-07-24 19:06:59.390665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.940 [2024-07-24 19:06:59.390690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.940 qpair failed and we were unable to recover it. 
00:24:21.940 [2024-07-24 19:06:59.390818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.940 [2024-07-24 19:06:59.390852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.940 qpair failed and we were unable to recover it.
00:24:21.940 [2024-07-24 19:06:59.391009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.940 [2024-07-24 19:06:59.391033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.940 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats for tqpair=0x17c6250 ...]
00:24:21.941 [2024-07-24 19:06:59.398230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.941 [2024-07-24 19:06:59.398278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420
00:24:21.941 qpair failed and we were unable to recover it.
[... the sequence continues through 19:06:59.424701, alternating between tqpair=0x17c6250 and tqpair=0x7f0728000b90; every attempt fails with errno = 111 and ends with "qpair failed and we were unable to recover it." ...]
00:24:21.945 [2024-07-24 19:06:59.424675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.945 [2024-07-24 19:06:59.424701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.945 qpair failed and we were unable to recover it.
00:24:21.945 [2024-07-24 19:06:59.424862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.945 [2024-07-24 19:06:59.424887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.945 qpair failed and we were unable to recover it. 00:24:21.945 [2024-07-24 19:06:59.425012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.945 [2024-07-24 19:06:59.425038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.945 qpair failed and we were unable to recover it. 00:24:21.945 [2024-07-24 19:06:59.425159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.945 [2024-07-24 19:06:59.425184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.945 qpair failed and we were unable to recover it. 00:24:21.945 [2024-07-24 19:06:59.425331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.945 [2024-07-24 19:06:59.425356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.945 qpair failed and we were unable to recover it. 00:24:21.945 [2024-07-24 19:06:59.425490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.945 [2024-07-24 19:06:59.425515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.945 qpair failed and we were unable to recover it. 00:24:21.945 [2024-07-24 19:06:59.425635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.945 [2024-07-24 19:06:59.425660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.945 qpair failed and we were unable to recover it. 00:24:21.945 [2024-07-24 19:06:59.425778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.945 [2024-07-24 19:06:59.425803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.945 qpair failed and we were unable to recover it. 00:24:21.945 [2024-07-24 19:06:59.425931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.945 [2024-07-24 19:06:59.425956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.945 qpair failed and we were unable to recover it. 00:24:21.945 [2024-07-24 19:06:59.426117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.945 [2024-07-24 19:06:59.426142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.945 qpair failed and we were unable to recover it. 00:24:21.945 [2024-07-24 19:06:59.426266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.426290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 
00:24:21.946 [2024-07-24 19:06:59.426408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.426433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 00:24:21.946 [2024-07-24 19:06:59.426559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.426583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 00:24:21.946 [2024-07-24 19:06:59.426709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.426734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 00:24:21.946 [2024-07-24 19:06:59.426876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.426900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 00:24:21.946 [2024-07-24 19:06:59.427041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.427066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 00:24:21.946 [2024-07-24 19:06:59.427214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.427240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 00:24:21.946 [2024-07-24 19:06:59.427375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.427400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 00:24:21.946 [2024-07-24 19:06:59.427520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.427545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 00:24:21.946 [2024-07-24 19:06:59.427664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.427689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 00:24:21.946 [2024-07-24 19:06:59.427839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.427863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 
00:24:21.946 [2024-07-24 19:06:59.428010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.428035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 00:24:21.946 [2024-07-24 19:06:59.428158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.428184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 00:24:21.946 [2024-07-24 19:06:59.428314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.428339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 00:24:21.946 [2024-07-24 19:06:59.428456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.428481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 00:24:21.946 [2024-07-24 19:06:59.428607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.428632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 00:24:21.946 [2024-07-24 19:06:59.428749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.428774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 00:24:21.946 [2024-07-24 19:06:59.428923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.428949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 00:24:21.946 [2024-07-24 19:06:59.429066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.429091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 00:24:21.946 [2024-07-24 19:06:59.429259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.429298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 00:24:21.946 [2024-07-24 19:06:59.429431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.429458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 
00:24:21.946 [2024-07-24 19:06:59.429585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.429610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 00:24:21.946 [2024-07-24 19:06:59.429733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.429759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 00:24:21.946 [2024-07-24 19:06:59.429882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.429906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 00:24:21.946 [2024-07-24 19:06:59.430027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.430052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 00:24:21.946 [2024-07-24 19:06:59.430184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.430210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 00:24:21.946 [2024-07-24 19:06:59.430333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.430358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 00:24:21.946 [2024-07-24 19:06:59.430479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.430504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 00:24:21.946 [2024-07-24 19:06:59.430653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.430678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 00:24:21.946 [2024-07-24 19:06:59.430797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.430822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 00:24:21.946 [2024-07-24 19:06:59.430942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.430967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 
00:24:21.946 [2024-07-24 19:06:59.431091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.431126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 00:24:21.946 [2024-07-24 19:06:59.431283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.431308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 00:24:21.946 [2024-07-24 19:06:59.431434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.431458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 00:24:21.946 [2024-07-24 19:06:59.431595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.431620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 00:24:21.946 [2024-07-24 19:06:59.431738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.431762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 00:24:21.946 [2024-07-24 19:06:59.431893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.431918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 00:24:21.946 [2024-07-24 19:06:59.432050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.946 [2024-07-24 19:06:59.432076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.946 qpair failed and we were unable to recover it. 00:24:21.946 [2024-07-24 19:06:59.432220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.432245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 00:24:21.947 [2024-07-24 19:06:59.432373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.432398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 00:24:21.947 [2024-07-24 19:06:59.432545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.432569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 
00:24:21.947 [2024-07-24 19:06:59.432698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.432723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 00:24:21.947 [2024-07-24 19:06:59.432845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.432870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 00:24:21.947 [2024-07-24 19:06:59.432999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.433027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 00:24:21.947 [2024-07-24 19:06:59.433167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.433193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 00:24:21.947 [2024-07-24 19:06:59.433322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.433347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 00:24:21.947 [2024-07-24 19:06:59.433467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.433493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 00:24:21.947 [2024-07-24 19:06:59.433626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.433650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 00:24:21.947 [2024-07-24 19:06:59.433765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.433790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 00:24:21.947 [2024-07-24 19:06:59.433911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.433937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 00:24:21.947 [2024-07-24 19:06:59.434052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.434076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 
00:24:21.947 [2024-07-24 19:06:59.434228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.434254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 00:24:21.947 [2024-07-24 19:06:59.434372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.434397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 00:24:21.947 [2024-07-24 19:06:59.434515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.434540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 00:24:21.947 [2024-07-24 19:06:59.434659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.434685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 00:24:21.947 [2024-07-24 19:06:59.434801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.434825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 00:24:21.947 [2024-07-24 19:06:59.434941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.434965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 00:24:21.947 [2024-07-24 19:06:59.435095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.435125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 00:24:21.947 [2024-07-24 19:06:59.435270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.435296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 00:24:21.947 [2024-07-24 19:06:59.435420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.435444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 00:24:21.947 [2024-07-24 19:06:59.435591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.435617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 
00:24:21.947 [2024-07-24 19:06:59.435736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.435761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 00:24:21.947 [2024-07-24 19:06:59.435882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.435908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 00:24:21.947 [2024-07-24 19:06:59.436030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.436055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 00:24:21.947 [2024-07-24 19:06:59.436205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.436246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 00:24:21.947 [2024-07-24 19:06:59.436375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.436401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 00:24:21.947 [2024-07-24 19:06:59.436525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.436550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 00:24:21.947 [2024-07-24 19:06:59.436676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.436703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 00:24:21.947 [2024-07-24 19:06:59.436829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.436855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 00:24:21.947 [2024-07-24 19:06:59.436978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.437003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 00:24:21.947 [2024-07-24 19:06:59.437154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.437187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 
00:24:21.947 [2024-07-24 19:06:59.437315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.437340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 00:24:21.947 [2024-07-24 19:06:59.437470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.437495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 00:24:21.947 [2024-07-24 19:06:59.437646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.437672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 00:24:21.947 [2024-07-24 19:06:59.437793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.947 [2024-07-24 19:06:59.437817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.947 qpair failed and we were unable to recover it. 00:24:21.947 [2024-07-24 19:06:59.437946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.437970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 00:24:21.948 [2024-07-24 19:06:59.438111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.438149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 00:24:21.948 [2024-07-24 19:06:59.438280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.438305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 00:24:21.948 [2024-07-24 19:06:59.438423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.438448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 00:24:21.948 [2024-07-24 19:06:59.438600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.438624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 00:24:21.948 [2024-07-24 19:06:59.438751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.438776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 
00:24:21.948 [2024-07-24 19:06:59.438929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.438956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 00:24:21.948 [2024-07-24 19:06:59.439076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.439100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 00:24:21.948 [2024-07-24 19:06:59.439275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.439300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 00:24:21.948 [2024-07-24 19:06:59.439440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.439465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 00:24:21.948 [2024-07-24 19:06:59.439579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.439603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 00:24:21.948 [2024-07-24 19:06:59.439724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.439749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 00:24:21.948 [2024-07-24 19:06:59.439874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.439900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 00:24:21.948 [2024-07-24 19:06:59.440051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.440076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 00:24:21.948 [2024-07-24 19:06:59.440221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.440247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 00:24:21.948 [2024-07-24 19:06:59.440386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.440412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 
00:24:21.948 [2024-07-24 19:06:59.440542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.440566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 00:24:21.948 [2024-07-24 19:06:59.440689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.440716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 00:24:21.948 [2024-07-24 19:06:59.440844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.440870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 00:24:21.948 [2024-07-24 19:06:59.441016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.441040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 00:24:21.948 [2024-07-24 19:06:59.441183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.441208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 00:24:21.948 [2024-07-24 19:06:59.441336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.441362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 00:24:21.948 [2024-07-24 19:06:59.441544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.441582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 00:24:21.948 [2024-07-24 19:06:59.441708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.441736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 00:24:21.948 [2024-07-24 19:06:59.441869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.441894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 00:24:21.948 [2024-07-24 19:06:59.442018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.442043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 
00:24:21.948 [2024-07-24 19:06:59.442167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.442193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 00:24:21.948 [2024-07-24 19:06:59.442379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.442403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 00:24:21.948 [2024-07-24 19:06:59.442522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.442547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 00:24:21.948 [2024-07-24 19:06:59.442672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.442697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 00:24:21.948 [2024-07-24 19:06:59.442814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.442838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 00:24:21.948 [2024-07-24 19:06:59.442958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.442983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 00:24:21.948 [2024-07-24 19:06:59.443134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.443168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 00:24:21.948 [2024-07-24 19:06:59.443296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.443321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 00:24:21.948 [2024-07-24 19:06:59.443441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.443465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 00:24:21.948 [2024-07-24 19:06:59.443593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.948 [2024-07-24 19:06:59.443618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:21.948 qpair failed and we were unable to recover it. 
00:24:21.948 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:24:21.949 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:24:21.949 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:24:21.949 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:24:21.949 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:21.949 [... while the harness exits the start_nvmf_tgt timing section and disables xtrace, the same connect()/qpair failure keeps firing in the background: eight more entries from 19:06:59.443764 through 19:06:59.444845, all tqpair=0x17c6250, addr=10.0.0.2, port=4420 ...]
00:24:21.949 [2024-07-24 19:06:59.444965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.949 [2024-07-24 19:06:59.444989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.949 qpair failed and we were unable to recover it.
00:24:21.950 [... roughly 40 further repetitions follow through 19:06:59.451352, first on tqpair=0x17c6250 and then, from 19:06:59.447239 on, on tqpair=0x7f0728000b90, all against addr=10.0.0.2, port=4420 ...]
00:24:21.950 [2024-07-24 19:06:59.451471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.950 [2024-07-24 19:06:59.451495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.950 qpair failed and we were unable to recover it. 00:24:21.950 [2024-07-24 19:06:59.451659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.950 [2024-07-24 19:06:59.451685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.950 qpair failed and we were unable to recover it. 00:24:21.950 [2024-07-24 19:06:59.451820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.950 [2024-07-24 19:06:59.451846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.950 qpair failed and we were unable to recover it. 00:24:21.950 [2024-07-24 19:06:59.451965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.950 [2024-07-24 19:06:59.451995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.950 qpair failed and we were unable to recover it. 00:24:21.950 [2024-07-24 19:06:59.452165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.950 [2024-07-24 19:06:59.452192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.950 qpair failed and we were unable to recover it. 00:24:21.950 [2024-07-24 19:06:59.452313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.950 [2024-07-24 19:06:59.452339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.950 qpair failed and we were unable to recover it. 00:24:21.950 [2024-07-24 19:06:59.452486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.950 [2024-07-24 19:06:59.452511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.950 qpair failed and we were unable to recover it. 00:24:21.950 [2024-07-24 19:06:59.452666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.950 [2024-07-24 19:06:59.452691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.950 qpair failed and we were unable to recover it. 00:24:21.950 [2024-07-24 19:06:59.452814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.950 [2024-07-24 19:06:59.452840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.950 qpair failed and we were unable to recover it. 00:24:21.950 [2024-07-24 19:06:59.452969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.950 [2024-07-24 19:06:59.452994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.950 qpair failed and we were unable to recover it. 
00:24:21.950 [2024-07-24 19:06:59.453140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.950 [2024-07-24 19:06:59.453165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.950 qpair failed and we were unable to recover it. 00:24:21.950 [2024-07-24 19:06:59.453291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.950 [2024-07-24 19:06:59.453317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.950 qpair failed and we were unable to recover it. 00:24:21.950 [2024-07-24 19:06:59.453434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.950 [2024-07-24 19:06:59.453459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.950 qpair failed and we were unable to recover it. 00:24:21.950 [2024-07-24 19:06:59.453581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.950 [2024-07-24 19:06:59.453607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.950 qpair failed and we were unable to recover it. 00:24:21.950 [2024-07-24 19:06:59.453753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.950 [2024-07-24 19:06:59.453778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.950 qpair failed and we were unable to recover it. 00:24:21.950 [2024-07-24 19:06:59.453905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.950 [2024-07-24 19:06:59.453931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.950 qpair failed and we were unable to recover it. 00:24:21.950 [2024-07-24 19:06:59.454056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.950 [2024-07-24 19:06:59.454082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.950 qpair failed and we were unable to recover it. 00:24:21.950 [2024-07-24 19:06:59.454239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.950 [2024-07-24 19:06:59.454265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.950 qpair failed and we were unable to recover it. 00:24:21.950 [2024-07-24 19:06:59.454396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.950 [2024-07-24 19:06:59.454422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.950 qpair failed and we were unable to recover it. 00:24:21.950 [2024-07-24 19:06:59.454538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.950 [2024-07-24 19:06:59.454564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.950 qpair failed and we were unable to recover it. 
00:24:21.951 [2024-07-24 19:06:59.454690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.454716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 00:24:21.951 [2024-07-24 19:06:59.454868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.454894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 00:24:21.951 [2024-07-24 19:06:59.455011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.455036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 00:24:21.951 [2024-07-24 19:06:59.455160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.455187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 00:24:21.951 [2024-07-24 19:06:59.455317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.455343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 00:24:21.951 [2024-07-24 19:06:59.455483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.455508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 00:24:21.951 [2024-07-24 19:06:59.455664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.455691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 00:24:21.951 [2024-07-24 19:06:59.455837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.455862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 00:24:21.951 [2024-07-24 19:06:59.455986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.456011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 00:24:21.951 [2024-07-24 19:06:59.456133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.456166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 
00:24:21.951 [2024-07-24 19:06:59.456292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.456318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 00:24:21.951 [2024-07-24 19:06:59.456456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.456482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 00:24:21.951 [2024-07-24 19:06:59.456619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.456645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 00:24:21.951 [2024-07-24 19:06:59.456765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.456791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 00:24:21.951 [2024-07-24 19:06:59.456915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.456942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 00:24:21.951 [2024-07-24 19:06:59.457086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.457117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 00:24:21.951 [2024-07-24 19:06:59.457252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.457278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 00:24:21.951 [2024-07-24 19:06:59.457414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.457440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 00:24:21.951 [2024-07-24 19:06:59.457583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.457608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 00:24:21.951 [2024-07-24 19:06:59.457733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.457760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 
00:24:21.951 [2024-07-24 19:06:59.457937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.457963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 00:24:21.951 [2024-07-24 19:06:59.458090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.458120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 00:24:21.951 [2024-07-24 19:06:59.458257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.458284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 00:24:21.951 [2024-07-24 19:06:59.458439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.458469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 00:24:21.951 [2024-07-24 19:06:59.458593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.458618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 00:24:21.951 [2024-07-24 19:06:59.458739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.458764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 00:24:21.951 [2024-07-24 19:06:59.458903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.458927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 00:24:21.951 [2024-07-24 19:06:59.459053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.459080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 00:24:21.951 [2024-07-24 19:06:59.459220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.459246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 00:24:21.951 [2024-07-24 19:06:59.459373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.459397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 
00:24:21.951 [2024-07-24 19:06:59.459524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.459549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 00:24:21.951 [2024-07-24 19:06:59.459681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.459706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 00:24:21.951 [2024-07-24 19:06:59.459828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.459854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 00:24:21.951 [2024-07-24 19:06:59.459984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.460010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 00:24:21.951 [2024-07-24 19:06:59.460138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.460170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 00:24:21.951 [2024-07-24 19:06:59.460292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.460317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 00:24:21.951 [2024-07-24 19:06:59.460480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.460505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.951 qpair failed and we were unable to recover it. 00:24:21.951 [2024-07-24 19:06:59.460636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.951 [2024-07-24 19:06:59.460664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 00:24:21.952 [2024-07-24 19:06:59.460813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.460839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 00:24:21.952 [2024-07-24 19:06:59.461014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.461039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 
00:24:21.952 [2024-07-24 19:06:59.461166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.461191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 00:24:21.952 [2024-07-24 19:06:59.461321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.461347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 00:24:21.952 [2024-07-24 19:06:59.461469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.461494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 00:24:21.952 [2024-07-24 19:06:59.461622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.461647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 00:24:21.952 [2024-07-24 19:06:59.461823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.461849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 00:24:21.952 [2024-07-24 19:06:59.461965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.461990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 00:24:21.952 [2024-07-24 19:06:59.462113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.462150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 00:24:21.952 [2024-07-24 19:06:59.462277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.462302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 00:24:21.952 [2024-07-24 19:06:59.462475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.462501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 00:24:21.952 [2024-07-24 19:06:59.462628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.462653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 
00:24:21.952 [2024-07-24 19:06:59.462815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.462841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 00:24:21.952 [2024-07-24 19:06:59.462957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.462981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 00:24:21.952 [2024-07-24 19:06:59.463118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.463143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 00:24:21.952 [2024-07-24 19:06:59.463269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.463294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 00:24:21.952 [2024-07-24 19:06:59.463448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.463474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 00:24:21.952 [2024-07-24 19:06:59.463593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.463618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 00:24:21.952 [2024-07-24 19:06:59.463753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.463779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 00:24:21.952 [2024-07-24 19:06:59.463914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.463940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 00:24:21.952 [2024-07-24 19:06:59.464097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.464136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 00:24:21.952 [2024-07-24 19:06:59.464270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.464296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 
00:24:21.952 [2024-07-24 19:06:59.464436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.464461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 00:24:21.952 [2024-07-24 19:06:59.464580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.464605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 00:24:21.952 [2024-07-24 19:06:59.464728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.464753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 00:24:21.952 [2024-07-24 19:06:59.464900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.464930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 00:24:21.952 [2024-07-24 19:06:59.465062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.465087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 00:24:21.952 [2024-07-24 19:06:59.465253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.465279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 00:24:21.952 [2024-07-24 19:06:59.465408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.465433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 00:24:21.952 [2024-07-24 19:06:59.465562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.465587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 00:24:21.952 [2024-07-24 19:06:59.465706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.465731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 00:24:21.952 [2024-07-24 19:06:59.465845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.465872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 
00:24:21.952 [2024-07-24 19:06:59.466032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.466056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 00:24:21.952 [2024-07-24 19:06:59.466191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.466217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 00:24:21.952 [2024-07-24 19:06:59.466348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.466374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 00:24:21.952 [2024-07-24 19:06:59.466523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.466549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 00:24:21.952 [2024-07-24 19:06:59.466672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.952 [2024-07-24 19:06:59.466698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.952 qpair failed and we were unable to recover it. 00:24:21.952 [2024-07-24 19:06:59.466828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.953 [2024-07-24 19:06:59.466854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.953 qpair failed and we were unable to recover it. 00:24:21.953 [2024-07-24 19:06:59.466967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.953 [2024-07-24 19:06:59.466992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.953 qpair failed and we were unable to recover it. 00:24:21.953 [2024-07-24 19:06:59.467145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.953 [2024-07-24 19:06:59.467172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.953 qpair failed and we were unable to recover it. 00:24:21.953 [2024-07-24 19:06:59.467307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.953 [2024-07-24 19:06:59.467333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.953 qpair failed and we were unable to recover it. 00:24:21.953 [2024-07-24 19:06:59.467460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.953 [2024-07-24 19:06:59.467487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:21.953 qpair failed and we were unable to recover it. 
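errno = 111 is ECONNREFUSED: with the target deliberately taken down by the disconnect test, nothing is listening on 10.0.0.2:4420, so every reconnect attempt the host driver makes is rejected during the TCP handshake and the qpair cannot be re-established. A minimal standalone sketch of the same refusal, outside the test suite (bash with /dev/tcp support assumed; the address and port simply mirror the values in the log):

#!/usr/bin/env bash
# Probe the NVMe/TCP listen address the way the failing connect() does.
# While the target is stopped the handshake is refused and the
# redirection fails -- the shell-level analogue of the errno 111
# (ECONNREFUSED) that posix_sock_create reports above.
if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "10.0.0.2:4420 is accepting connections again"
else
    echo "connect to 10.0.0.2:4420 refused or timed out"
fi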
00:24:21.953 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:21.953 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:24:21.953 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:21.953 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... 8 interleaved "connect() failed, errno = 111" sequences for tqpair=0x7f0728000b90 (19:06:59.467610 through 19:06:59.468815) omitted ...]
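In the xtrace above, rpc_cmd is the suite's wrapper around SPDK's JSON-RPC client, and the trap registers cleanup (process_shm, then nvmftestfini) so the target is torn down even if the test aborts. Run by hand against a live SPDK target, the same bdev creation would look roughly like this (the scripts/rpc.py path assumes a default SPDK checkout; the name and sizes are the ones from the trace):

# Create a 64 MB malloc bdev with 512-byte blocks, named Malloc0,
# over the target's JSON-RPC socket.
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0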
[... 16 further identical sequences for tqpair=0x7f0728000b90 (19:06:59.468970 through 19:06:59.471256) omitted ...]
00:24:21.953 [2024-07-24 19:06:59.471420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.953 [2024-07-24 19:06:59.471458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:21.953 qpair failed and we were unable to recover it.
[... 37 further identical sequences for tqpair=0x17c6250 (19:06:59.471593 through 19:06:59.477273; log timestamps advance from 00:24:21.953 to 00:24:22.219) omitted ...]
00:24:22.219 [2024-07-24 19:06:59.477464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.219 [2024-07-24 19:06:59.477505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420
00:24:22.219 qpair failed and we were unable to recover it.
[... 5 further identical sequences for tqpair=0x7f0720000b90 (19:06:59.477634 through 19:06:59.478291) omitted ...]
00:24:22.220 [2024-07-24 19:06:59.478420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.478446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 00:24:22.220 [2024-07-24 19:06:59.478566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.478591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 00:24:22.220 [2024-07-24 19:06:59.478718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.478743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 00:24:22.220 [2024-07-24 19:06:59.478863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.478889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 00:24:22.220 [2024-07-24 19:06:59.479010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.479035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 00:24:22.220 [2024-07-24 19:06:59.479183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.479210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 00:24:22.220 [2024-07-24 19:06:59.479355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.479382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 00:24:22.220 [2024-07-24 19:06:59.479520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.479551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 00:24:22.220 [2024-07-24 19:06:59.479706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.479732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 00:24:22.220 [2024-07-24 19:06:59.479880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.479905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 
00:24:22.220 [2024-07-24 19:06:59.480059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.480083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 00:24:22.220 [2024-07-24 19:06:59.480245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.480271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 00:24:22.220 [2024-07-24 19:06:59.480395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.480419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 00:24:22.220 [2024-07-24 19:06:59.480588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.480613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 00:24:22.220 [2024-07-24 19:06:59.480747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.480771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 00:24:22.220 [2024-07-24 19:06:59.480923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.480947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 00:24:22.220 [2024-07-24 19:06:59.481097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.481127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 00:24:22.220 [2024-07-24 19:06:59.481267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.481292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 00:24:22.220 [2024-07-24 19:06:59.481441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.481466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 00:24:22.220 [2024-07-24 19:06:59.481642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.481666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 
00:24:22.220 [2024-07-24 19:06:59.481785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.481810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 00:24:22.220 [2024-07-24 19:06:59.481965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.481989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 00:24:22.220 [2024-07-24 19:06:59.482143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.482176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 00:24:22.220 [2024-07-24 19:06:59.482319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.482344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 00:24:22.220 [2024-07-24 19:06:59.482491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.482515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 00:24:22.220 [2024-07-24 19:06:59.482653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.482678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 00:24:22.220 [2024-07-24 19:06:59.482810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.482837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 00:24:22.220 [2024-07-24 19:06:59.482974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.483000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 00:24:22.220 [2024-07-24 19:06:59.483153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.483179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 00:24:22.220 [2024-07-24 19:06:59.483319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.483345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 
00:24:22.220 [2024-07-24 19:06:59.483467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.483492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 00:24:22.220 [2024-07-24 19:06:59.483644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.483669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 00:24:22.220 [2024-07-24 19:06:59.483820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.483845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 00:24:22.220 [2024-07-24 19:06:59.483973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.483999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 00:24:22.220 [2024-07-24 19:06:59.484189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.220 [2024-07-24 19:06:59.484231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.220 qpair failed and we were unable to recover it. 00:24:22.220 [2024-07-24 19:06:59.484457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.484484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 00:24:22.221 [2024-07-24 19:06:59.484637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.484663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 00:24:22.221 [2024-07-24 19:06:59.484799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.484825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 00:24:22.221 [2024-07-24 19:06:59.484955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.484980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 00:24:22.221 [2024-07-24 19:06:59.485097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.485130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 
00:24:22.221 [2024-07-24 19:06:59.485262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.485288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 00:24:22.221 [2024-07-24 19:06:59.485437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.485463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 00:24:22.221 [2024-07-24 19:06:59.485632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.485658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 00:24:22.221 [2024-07-24 19:06:59.485792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.485817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 00:24:22.221 [2024-07-24 19:06:59.485943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.485968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 00:24:22.221 [2024-07-24 19:06:59.486122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.486149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 00:24:22.221 [2024-07-24 19:06:59.486296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.486322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 00:24:22.221 [2024-07-24 19:06:59.486473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.486504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 00:24:22.221 [2024-07-24 19:06:59.486672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.486698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 00:24:22.221 [2024-07-24 19:06:59.486845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.486871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 
00:24:22.221 [2024-07-24 19:06:59.486989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.487015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 00:24:22.221 [2024-07-24 19:06:59.487157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.487184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 00:24:22.221 [2024-07-24 19:06:59.487328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.487355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 00:24:22.221 [2024-07-24 19:06:59.487490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.487515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 00:24:22.221 [2024-07-24 19:06:59.487675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.487702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 00:24:22.221 [2024-07-24 19:06:59.487866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.487892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 00:24:22.221 [2024-07-24 19:06:59.488012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.488037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 00:24:22.221 [2024-07-24 19:06:59.488160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.488187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 00:24:22.221 [2024-07-24 19:06:59.488320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.488346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 00:24:22.221 [2024-07-24 19:06:59.488501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.488527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 
00:24:22.221 [2024-07-24 19:06:59.488731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.488756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 00:24:22.221 [2024-07-24 19:06:59.488918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.488943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 00:24:22.221 [2024-07-24 19:06:59.489059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.489084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 00:24:22.221 [2024-07-24 19:06:59.489225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.489250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 00:24:22.221 [2024-07-24 19:06:59.489378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.489403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 00:24:22.221 [2024-07-24 19:06:59.489553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.489579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 00:24:22.221 [2024-07-24 19:06:59.489783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.489808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 00:24:22.221 [2024-07-24 19:06:59.489990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.490014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 00:24:22.221 [2024-07-24 19:06:59.490152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.490178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 00:24:22.221 [2024-07-24 19:06:59.490306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.490332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 
00:24:22.221 [2024-07-24 19:06:59.490447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.490472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 00:24:22.221 [2024-07-24 19:06:59.490683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.490709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 00:24:22.221 [2024-07-24 19:06:59.490861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.221 [2024-07-24 19:06:59.490886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.221 qpair failed and we were unable to recover it. 00:24:22.222 [2024-07-24 19:06:59.491007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.222 [2024-07-24 19:06:59.491033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.222 qpair failed and we were unable to recover it. 00:24:22.222 [2024-07-24 19:06:59.491233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.222 [2024-07-24 19:06:59.491286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:22.222 qpair failed and we were unable to recover it. 00:24:22.222 [2024-07-24 19:06:59.491437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.222 [2024-07-24 19:06:59.491465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:22.222 qpair failed and we were unable to recover it. 00:24:22.222 [2024-07-24 19:06:59.491604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.222 [2024-07-24 19:06:59.491630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:22.222 qpair failed and we were unable to recover it. 00:24:22.222 [2024-07-24 19:06:59.491766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.222 [2024-07-24 19:06:59.491793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:22.222 qpair failed and we were unable to recover it. 00:24:22.222 [2024-07-24 19:06:59.491952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.222 [2024-07-24 19:06:59.491992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:22.222 qpair failed and we were unable to recover it. 00:24:22.222 [2024-07-24 19:06:59.492145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.222 [2024-07-24 19:06:59.492191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.222 qpair failed and we were unable to recover it. 
00:24:22.222 [2024-07-24 19:06:59.492325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.222 [2024-07-24 19:06:59.492352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.222 qpair failed and we were unable to recover it. 00:24:22.222 [2024-07-24 19:06:59.492477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.222 [2024-07-24 19:06:59.492503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.222 qpair failed and we were unable to recover it. 00:24:22.222 [2024-07-24 19:06:59.492655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.222 [2024-07-24 19:06:59.492680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.222 qpair failed and we were unable to recover it. 00:24:22.222 [2024-07-24 19:06:59.492845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.222 [2024-07-24 19:06:59.492869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.222 qpair failed and we were unable to recover it. 00:24:22.222 [2024-07-24 19:06:59.492993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.222 [2024-07-24 19:06:59.493019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.222 qpair failed and we were unable to recover it. 00:24:22.222 [2024-07-24 19:06:59.493188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.222 [2024-07-24 19:06:59.493227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:22.222 qpair failed and we were unable to recover it. 00:24:22.222 [2024-07-24 19:06:59.493356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.222 [2024-07-24 19:06:59.493382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:22.222 qpair failed and we were unable to recover it. 00:24:22.222 [2024-07-24 19:06:59.493507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.222 [2024-07-24 19:06:59.493538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:22.222 qpair failed and we were unable to recover it. 00:24:22.222 [2024-07-24 19:06:59.493697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.222 [2024-07-24 19:06:59.493724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:22.222 qpair failed and we were unable to recover it. 00:24:22.222 [2024-07-24 19:06:59.493865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.222 [2024-07-24 19:06:59.493890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:22.222 qpair failed and we were unable to recover it. 
00:24:22.222 [2024-07-24 19:06:59.494023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.222 [2024-07-24 19:06:59.494048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:22.222 qpair failed and we were unable to recover it. 00:24:22.222 [2024-07-24 19:06:59.494192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.222 [2024-07-24 19:06:59.494219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:22.222 qpair failed and we were unable to recover it. 00:24:22.222 [2024-07-24 19:06:59.494352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.222 [2024-07-24 19:06:59.494380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.222 qpair failed and we were unable to recover it. 00:24:22.222 [2024-07-24 19:06:59.494529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.222 [2024-07-24 19:06:59.494555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.222 qpair failed and we were unable to recover it. 00:24:22.222 [2024-07-24 19:06:59.494682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.222 [2024-07-24 19:06:59.494707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.222 qpair failed and we were unable to recover it. 00:24:22.222 [2024-07-24 19:06:59.494835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.222 [2024-07-24 19:06:59.494861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0728000b90 with addr=10.0.0.2, port=4420 00:24:22.222 qpair failed and we were unable to recover it. 00:24:22.222 [2024-07-24 19:06:59.495003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.222 [2024-07-24 19:06:59.495042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:22.222 qpair failed and we were unable to recover it. 00:24:22.222 [2024-07-24 19:06:59.495203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.222 [2024-07-24 19:06:59.495231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:22.222 qpair failed and we were unable to recover it. 00:24:22.222 [2024-07-24 19:06:59.495354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.222 [2024-07-24 19:06:59.495380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:22.222 qpair failed and we were unable to recover it. 00:24:22.222 [2024-07-24 19:06:59.495513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.222 [2024-07-24 19:06:59.495540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0718000b90 with addr=10.0.0.2, port=4420 00:24:22.222 qpair failed and we were unable to recover it. 
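(Note on the repeated failure above: errno = 111 is ECONNREFUSED on Linux. The host side is dialing 10.0.0.2:4420 -- 4420 is the IANA-assigned NVMe over Fabrics port -- while nothing on the target is accepting yet, which is the state a target-disconnect test deliberately creates. A minimal standalone sketch, illustrative only and not SPDK code, that reproduces the same errno by connecting to a local port assumed to have no listener:

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* 127.0.0.1:4420 stands in for the log's 10.0.0.2:4420; the assumption
     * is that nothing is listening there, so the connect() is refused. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),
    };
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* On Linux this prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

The kernel answers the SYN with RST because no socket is bound on the port, so connect() fails immediately rather than timing out -- which is why the log shows hundreds of failures within a few milliseconds.)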
[... failures continue through 19:06:59.496 for tqpair=0x7f0718000b90, 0x17c6250, and 0x7f0728000b90 ...]
00:24:22.222 Malloc0
00:24:22.222 [2024-07-24 19:06:59.497135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.222 [2024-07-24 19:06:59.497161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:22.222 qpair failed and we were unable to recover it.
00:24:22.222 [2024-07-24 19:06:59.497289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:22.223 [2024-07-24 19:06:59.497315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420
00:24:22.223 qpair failed and we were unable to recover it.
00:24:22.223 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:22.223 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:24:22.223 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:22.223 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... identical connect() failed / qpair failed messages interleave with the trace above, for tqpair=0x17c6250 and then tqpair=0x7f0728000b90, through 19:06:59.498 ...]
[... connect() failed, errno = 111 / qpair failed messages continue from 19:06:59.498 through 19:06:59.500 for tqpair=0x7f0728000b90 and tqpair=0x17c6250 ...]
[... further connect() failures for tqpair=0x7f0728000b90 through 19:06:59.500 ...]
00:24:22.223 [2024-07-24 19:06:59.500711] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[... connect() failures continue for tqpair=0x7f0728000b90, 0x17c6250, 0x7f0720000b90, and 0x7f0718000b90 through 19:06:59.501 ...]
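(Note: the *** TCP Transport Init *** notice above is the target side completing the nvmf_create_transport RPC issued by the test script; connects from the host can only start succeeding once a listener is also added on 10.0.0.2:4420. Behaviorally, the host log amounts to repeated connect() attempts that retry while the error stays ECONNREFUSED. A bare-bones sketch of such a loop -- illustrative only, connect_with_retry is a made-up helper, not SPDK's qpair recovery path:

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdint.h>
#include <sys/socket.h>
#include <unistd.h>

/* connect_with_retry() is a hypothetical helper, not an SPDK function. */
static int connect_with_retry(const char *ip, uint16_t port, int max_tries)
{
    for (int i = 0; i < max_tries; i++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port = htons(port),
        };
        inet_pton(AF_INET, ip, &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            return fd;              /* listener came up: connected */
        }
        close(fd);
        if (errno != ECONNREFUSED) {
            return -1;              /* a different, non-retriable failure */
        }
        usleep(100 * 1000);         /* brief backoff, then try again */
    }
    return -1;                      /* out of attempts: "unable to recover it" */
}

int main(void)
{
    int fd = connect_with_retry("127.0.0.1", 4420, 50);
    if (fd < 0) {
        return 1;
    }
    close(fd);
    return 0;
}

SPDK's real reconnect logic in nvme_tcp.c is asynchronous and far more involved; the sketch only shows why the same error record can repeat many times per qpair before the test either reconnects or gives up.)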
[... the same connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." sequence keeps repeating from 19:06:59.501 through 19:06:59.506, cycling over tqpair=0x7f0718000b90, 0x17c6250, 0x7f0728000b90, and 0x7f0720000b90, all against addr=10.0.0.2, port=4420 ...]
00:24:22.224 [2024-07-24 19:06:59.506753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.224 [2024-07-24 19:06:59.506777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.224 qpair failed and we were unable to recover it. 00:24:22.224 [2024-07-24 19:06:59.506901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.224 [2024-07-24 19:06:59.506926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.224 qpair failed and we were unable to recover it. 00:24:22.224 [2024-07-24 19:06:59.507067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.224 [2024-07-24 19:06:59.507092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.224 qpair failed and we were unable to recover it. 00:24:22.224 [2024-07-24 19:06:59.507244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.224 [2024-07-24 19:06:59.507269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.224 qpair failed and we were unable to recover it. 00:24:22.224 [2024-07-24 19:06:59.507402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.224 [2024-07-24 19:06:59.507428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.224 qpair failed and we were unable to recover it. 00:24:22.224 [2024-07-24 19:06:59.507567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.224 [2024-07-24 19:06:59.507593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.224 qpair failed and we were unable to recover it. 00:24:22.224 [2024-07-24 19:06:59.507731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.224 [2024-07-24 19:06:59.507756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0720000b90 with addr=10.0.0.2, port=4420 00:24:22.224 qpair failed and we were unable to recover it. 00:24:22.224 [2024-07-24 19:06:59.507884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.224 [2024-07-24 19:06:59.507912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:22.224 qpair failed and we were unable to recover it. 00:24:22.224 [2024-07-24 19:06:59.508047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.224 [2024-07-24 19:06:59.508072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:22.224 qpair failed and we were unable to recover it. 00:24:22.224 [2024-07-24 19:06:59.508205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.224 [2024-07-24 19:06:59.508239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6250 with addr=10.0.0.2, port=4420 00:24:22.224 qpair failed and we were unable to recover it. 
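errno = 111 is ECONNREFUSED on Linux: the host-side initiator is retrying connect() to 10.0.0.2:4420 before the target's listener exists (the listener only comes up at the nvmf_tcp_listen NOTICE further down). A minimal bash sketch of the same probe, not part of the harness, assuming only bash's built-in /dev/tcp redirection; host and port are copied from the log above:

    #!/usr/bin/env bash
    # Poll the NVMe/TCP portal until connect() stops failing with
    # ECONNREFUSED (errno 111); each failed attempt mirrors the
    # posix_sock_create error records above.
    addr=10.0.0.2 port=4420
    until bash -c "exec 3<>/dev/tcp/${addr}/${port}" 2>/dev/null; do
      echo "connect() to ${addr}:${port} refused, retrying"
      sleep 0.1
    done
    echo "listener is up on ${addr}:${port}"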
[further repeats of the same triplet against tqpair=0x17c6250 between 19:06:59.508 and 19:06:59.509; verbatim repeats elided]
00:24:22.225 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:22.225 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:22.225 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:22.225 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
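The rpc_cmd trace above is the harness driving SPDK's JSON-RPC client. A standalone sketch of the same step, assuming the default local RPC socket that scripts/rpc.py talks to (flag values copied from the trace):

    # Create the subsystem; -a allows any host NQN to connect,
    # -s sets the serial number.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001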
[same triplet repeating between 19:06:59.509 and 19:06:59.516 against tqpair=0x17c6250 and 0x7f0720000b90; verbatim repeats elided]
[same triplet repeating around 19:06:59.516-517 against tqpair=0x17c6250, 0x7f0728000b90 and 0x7f0720000b90; verbatim repeats elided]
00:24:22.226 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:22.226 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:22.226 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:22.226 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[further repeats of the triplet through 19:06:59.518; elided]
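The add_ns step attaches a bdev to the subsystem as a namespace. A standalone sketch under the same local-socket assumption, and assuming a malloc bdev named Malloc0 was created earlier in the run (e.g. with bdev_malloc_create):

    # Expose Malloc0 as a namespace of cnode1.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0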
[same triplet repeating between 19:06:59.518 and 19:06:59.523 against tqpair=0x7f0720000b90 and 0x17c6250; verbatim repeats elided]
[same triplet repeating around 19:06:59.523-524; verbatim repeats elided]
00:24:22.227 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:22.227 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:22.227 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:22.227 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[further repeats of the triplet through 19:06:59.526 against tqpair=0x7f0728000b90 and 0x17c6250; elided]
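The listener step is what finally opens the portal: once it completes, the target prints the "Listening on 10.0.0.2 port 4420" NOTICE below and the initiator's connect() attempts stop being refused. A standalone sketch, same local-socket assumption, arguments copied from the trace:

    # Start listening for NVMe/TCP on the portal the initiator retries.
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420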
[same triplet repeating between 19:06:59.526 and 19:06:59.528 against tqpair=0x17c6250 and 0x7f0728000b90; verbatim repeats elided]
[final repeats of the triplet against tqpair=0x17c6250 at 19:06:59.528; elided]
00:24:22.228 [2024-07-24 19:06:59.528996] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:22.228 [2024-07-24 19:06:59.531423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.228 [2024-07-24 19:06:59.531567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.228 [2024-07-24 19:06:59.531595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.228 [2024-07-24 19:06:59.531610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.228 [2024-07-24 19:06:59.531623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.228 [2024-07-24 19:06:59.531657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.228 qpair failed and we were unable to recover it.
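One reading of the failure block above: sct 1 is the "command specific" status code type, and sc 130 is 0x82, which for a Fabrics CONNECT command is Connect Invalid Parameters per the NVMe-oF Fabrics status codes. That lines up with the target-side "Unknown controller ID 0x1": the I/O qpair's CONNECT named a controller ID the target has no record of. A quick decode in the shell:

    # 130 decimal is 0x82 (NVMe-oF CONNECT: Connect Invalid Parameters).
    printf 'sc %d = 0x%02x\n' 130 130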
00:24:22.228 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:22.228 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:24:22.228 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:22.228 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:22.228 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:22.228 19:06:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3252557
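The discovery subsystem gets a listener on the same portal, after which the harness blocks with bash's wait on a previously backgrounded child process (PID 3252557 in the trace). A standalone sketch of the RPC, same local-socket assumption:

    # Also announce the portal via the discovery subsystem.
    scripts/rpc.py nvmf_subsystem_add_listener discovery \
        -t tcp -a 10.0.0.2 -s 4420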
00:24:22.228 [2024-07-24 19:06:59.561357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.228 [2024-07-24 19:06:59.561497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.228 [2024-07-24 19:06:59.561523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.228 [2024-07-24 19:06:59.561538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.228 [2024-07-24 19:06:59.561551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.228 [2024-07-24 19:06:59.561581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.228 qpair failed and we were unable to recover it.
00:24:22.228 [2024-07-24 19:06:59.571328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.229 [2024-07-24 19:06:59.571486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.229 [2024-07-24 19:06:59.571513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.229 [2024-07-24 19:06:59.571527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.229 [2024-07-24 19:06:59.571540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.229 [2024-07-24 19:06:59.571569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.229 qpair failed and we were unable to recover it.
00:24:22.229 [2024-07-24 19:06:59.581390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.229 [2024-07-24 19:06:59.581533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.229 [2024-07-24 19:06:59.581559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.229 [2024-07-24 19:06:59.581574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.229 [2024-07-24 19:06:59.581587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.229 [2024-07-24 19:06:59.581616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.229 qpair failed and we were unable to recover it.
00:24:22.229 [2024-07-24 19:06:59.591467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.229 [2024-07-24 19:06:59.591614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.229 [2024-07-24 19:06:59.591640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.229 [2024-07-24 19:06:59.591655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.229 [2024-07-24 19:06:59.591667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.229 [2024-07-24 19:06:59.591697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.229 qpair failed and we were unable to recover it.
00:24:22.229 [2024-07-24 19:06:59.601413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.229 [2024-07-24 19:06:59.601546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.229 [2024-07-24 19:06:59.601572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.229 [2024-07-24 19:06:59.601587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.229 [2024-07-24 19:06:59.601601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.229 [2024-07-24 19:06:59.601630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.229 qpair failed and we were unable to recover it.
00:24:22.229 [2024-07-24 19:06:59.611432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.229 [2024-07-24 19:06:59.611587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.229 [2024-07-24 19:06:59.611613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.229 [2024-07-24 19:06:59.611628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.229 [2024-07-24 19:06:59.611642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.229 [2024-07-24 19:06:59.611671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.229 qpair failed and we were unable to recover it.
00:24:22.229 [2024-07-24 19:06:59.621443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.229 [2024-07-24 19:06:59.621578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.229 [2024-07-24 19:06:59.621604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.229 [2024-07-24 19:06:59.621618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.229 [2024-07-24 19:06:59.621637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.229 [2024-07-24 19:06:59.621668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.229 qpair failed and we were unable to recover it.
00:24:22.229 [2024-07-24 19:06:59.631469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.229 [2024-07-24 19:06:59.631595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.229 [2024-07-24 19:06:59.631621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.229 [2024-07-24 19:06:59.631636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.229 [2024-07-24 19:06:59.631649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.229 [2024-07-24 19:06:59.631679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.229 qpair failed and we were unable to recover it.
00:24:22.229 [2024-07-24 19:06:59.641524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.229 [2024-07-24 19:06:59.641656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.229 [2024-07-24 19:06:59.641682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.229 [2024-07-24 19:06:59.641696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.229 [2024-07-24 19:06:59.641709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.229 [2024-07-24 19:06:59.641739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.229 qpair failed and we were unable to recover it.
00:24:22.229 [2024-07-24 19:06:59.651570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.229 [2024-07-24 19:06:59.651694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.229 [2024-07-24 19:06:59.651720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.229 [2024-07-24 19:06:59.651735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.229 [2024-07-24 19:06:59.651748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.229 [2024-07-24 19:06:59.651777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.229 qpair failed and we were unable to recover it.
00:24:22.229 [2024-07-24 19:06:59.661590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.229 [2024-07-24 19:06:59.661721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.229 [2024-07-24 19:06:59.661747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.229 [2024-07-24 19:06:59.661762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.229 [2024-07-24 19:06:59.661775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.229 [2024-07-24 19:06:59.661804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.229 qpair failed and we were unable to recover it.
00:24:22.229 [2024-07-24 19:06:59.671655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.229 [2024-07-24 19:06:59.671782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.229 [2024-07-24 19:06:59.671808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.229 [2024-07-24 19:06:59.671823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.229 [2024-07-24 19:06:59.671836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.229 [2024-07-24 19:06:59.671867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.229 qpair failed and we were unable to recover it.
00:24:22.229 [2024-07-24 19:06:59.681598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.229 [2024-07-24 19:06:59.681731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.229 [2024-07-24 19:06:59.681757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.229 [2024-07-24 19:06:59.681772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.229 [2024-07-24 19:06:59.681786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.229 [2024-07-24 19:06:59.681817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.229 qpair failed and we were unable to recover it.
00:24:22.229 [2024-07-24 19:06:59.691679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.229 [2024-07-24 19:06:59.691804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.229 [2024-07-24 19:06:59.691830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.229 [2024-07-24 19:06:59.691845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.229 [2024-07-24 19:06:59.691858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.229 [2024-07-24 19:06:59.691888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.229 qpair failed and we were unable to recover it.
00:24:22.229 [2024-07-24 19:06:59.701704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.229 [2024-07-24 19:06:59.701829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.229 [2024-07-24 19:06:59.701855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.229 [2024-07-24 19:06:59.701869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.230 [2024-07-24 19:06:59.701882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.230 [2024-07-24 19:06:59.701912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.230 qpair failed and we were unable to recover it.
00:24:22.230 [2024-07-24 19:06:59.711734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.230 [2024-07-24 19:06:59.711871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.230 [2024-07-24 19:06:59.711897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.230 [2024-07-24 19:06:59.711917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.230 [2024-07-24 19:06:59.711931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.230 [2024-07-24 19:06:59.711961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.230 qpair failed and we were unable to recover it.
00:24:22.230 [2024-07-24 19:06:59.721717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.230 [2024-07-24 19:06:59.721851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.230 [2024-07-24 19:06:59.721876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.230 [2024-07-24 19:06:59.721892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.230 [2024-07-24 19:06:59.721904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.230 [2024-07-24 19:06:59.721933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.230 qpair failed and we were unable to recover it.
00:24:22.230 [2024-07-24 19:06:59.731786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.230 [2024-07-24 19:06:59.731912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.230 [2024-07-24 19:06:59.731938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.230 [2024-07-24 19:06:59.731952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.230 [2024-07-24 19:06:59.731965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.230 [2024-07-24 19:06:59.732007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.230 qpair failed and we were unable to recover it.
00:24:22.230 [2024-07-24 19:06:59.741886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.230 [2024-07-24 19:06:59.742009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.230 [2024-07-24 19:06:59.742034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.230 [2024-07-24 19:06:59.742049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.230 [2024-07-24 19:06:59.742062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.230 [2024-07-24 19:06:59.742091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.230 qpair failed and we were unable to recover it.
00:24:22.230 [2024-07-24 19:06:59.751865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.230 [2024-07-24 19:06:59.751991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.230 [2024-07-24 19:06:59.752017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.230 [2024-07-24 19:06:59.752032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.230 [2024-07-24 19:06:59.752044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.230 [2024-07-24 19:06:59.752073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.230 qpair failed and we were unable to recover it.
00:24:22.230 [2024-07-24 19:06:59.761838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.230 [2024-07-24 19:06:59.761979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.230 [2024-07-24 19:06:59.762005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.230 [2024-07-24 19:06:59.762019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.230 [2024-07-24 19:06:59.762032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.230 [2024-07-24 19:06:59.762064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.230 qpair failed and we were unable to recover it.
00:24:22.230 [2024-07-24 19:06:59.771904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.230 [2024-07-24 19:06:59.772034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.230 [2024-07-24 19:06:59.772061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.230 [2024-07-24 19:06:59.772081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.230 [2024-07-24 19:06:59.772094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.230 [2024-07-24 19:06:59.772135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.230 qpair failed and we were unable to recover it.
00:24:22.230 [2024-07-24 19:06:59.781936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.230 [2024-07-24 19:06:59.782067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.230 [2024-07-24 19:06:59.782094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.230 [2024-07-24 19:06:59.782118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.230 [2024-07-24 19:06:59.782134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.230 [2024-07-24 19:06:59.782165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.230 qpair failed and we were unable to recover it.
00:24:22.230 [2024-07-24 19:06:59.791955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.230 [2024-07-24 19:06:59.792077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.230 [2024-07-24 19:06:59.792111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.230 [2024-07-24 19:06:59.792128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.230 [2024-07-24 19:06:59.792142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.230 [2024-07-24 19:06:59.792185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.230 qpair failed and we were unable to recover it.
00:24:22.230 [2024-07-24 19:06:59.801980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.230 [2024-07-24 19:06:59.802125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.230 [2024-07-24 19:06:59.802157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.230 [2024-07-24 19:06:59.802173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.230 [2024-07-24 19:06:59.802186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.230 [2024-07-24 19:06:59.802216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.230 qpair failed and we were unable to recover it.
00:24:22.230 [2024-07-24 19:06:59.812010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.230 [2024-07-24 19:06:59.812154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.230 [2024-07-24 19:06:59.812181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.230 [2024-07-24 19:06:59.812196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.230 [2024-07-24 19:06:59.812218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.230 [2024-07-24 19:06:59.812262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.230 qpair failed and we were unable to recover it.
00:24:22.490 [2024-07-24 19:06:59.822044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.490 [2024-07-24 19:06:59.822171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.490 [2024-07-24 19:06:59.822198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.490 [2024-07-24 19:06:59.822212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.490 [2024-07-24 19:06:59.822225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.490 [2024-07-24 19:06:59.822255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.490 qpair failed and we were unable to recover it.
00:24:22.490 [2024-07-24 19:06:59.832066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.490 [2024-07-24 19:06:59.832206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.490 [2024-07-24 19:06:59.832232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.490 [2024-07-24 19:06:59.832246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.490 [2024-07-24 19:06:59.832260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.490 [2024-07-24 19:06:59.832289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.490 qpair failed and we were unable to recover it.
00:24:22.490 [2024-07-24 19:06:59.842088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.490 [2024-07-24 19:06:59.842231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.490 [2024-07-24 19:06:59.842257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.490 [2024-07-24 19:06:59.842271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.490 [2024-07-24 19:06:59.842284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.490 [2024-07-24 19:06:59.842320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.490 qpair failed and we were unable to recover it.
00:24:22.490 [2024-07-24 19:06:59.852093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.490 [2024-07-24 19:06:59.852234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.490 [2024-07-24 19:06:59.852260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.490 [2024-07-24 19:06:59.852274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.490 [2024-07-24 19:06:59.852287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.490 [2024-07-24 19:06:59.852316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.490 qpair failed and we were unable to recover it.
00:24:22.490 [2024-07-24 19:06:59.862150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.490 [2024-07-24 19:06:59.862278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.490 [2024-07-24 19:06:59.862305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.490 [2024-07-24 19:06:59.862320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.490 [2024-07-24 19:06:59.862332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.490 [2024-07-24 19:06:59.862376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.490 qpair failed and we were unable to recover it.
00:24:22.490 [2024-07-24 19:06:59.872189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.490 [2024-07-24 19:06:59.872316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.490 [2024-07-24 19:06:59.872342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.490 [2024-07-24 19:06:59.872357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.490 [2024-07-24 19:06:59.872369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.490 [2024-07-24 19:06:59.872400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.490 qpair failed and we were unable to recover it.
00:24:22.490 [2024-07-24 19:06:59.882182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.490 [2024-07-24 19:06:59.882357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.490 [2024-07-24 19:06:59.882383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.490 [2024-07-24 19:06:59.882397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.490 [2024-07-24 19:06:59.882410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.490 [2024-07-24 19:06:59.882439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.490 qpair failed and we were unable to recover it.
00:24:22.490 [2024-07-24 19:06:59.892211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.490 [2024-07-24 19:06:59.892331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.490 [2024-07-24 19:06:59.892361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.490 [2024-07-24 19:06:59.892377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.490 [2024-07-24 19:06:59.892390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.490 [2024-07-24 19:06:59.892432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.490 qpair failed and we were unable to recover it.
00:24:22.490 [2024-07-24 19:06:59.902265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.490 [2024-07-24 19:06:59.902390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.490 [2024-07-24 19:06:59.902416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.490 [2024-07-24 19:06:59.902430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.490 [2024-07-24 19:06:59.902443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.490 [2024-07-24 19:06:59.902472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.490 qpair failed and we were unable to recover it.
00:24:22.490 [2024-07-24 19:06:59.912305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.490 [2024-07-24 19:06:59.912433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.490 [2024-07-24 19:06:59.912459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.490 [2024-07-24 19:06:59.912474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.490 [2024-07-24 19:06:59.912487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.490 [2024-07-24 19:06:59.912516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.490 qpair failed and we were unable to recover it.
00:24:22.490 [2024-07-24 19:06:59.922310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.490 [2024-07-24 19:06:59.922440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.490 [2024-07-24 19:06:59.922466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.490 [2024-07-24 19:06:59.922480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.490 [2024-07-24 19:06:59.922493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.490 [2024-07-24 19:06:59.922522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.490 qpair failed and we were unable to recover it.
00:24:22.490 [2024-07-24 19:06:59.932330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.490 [2024-07-24 19:06:59.932455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.490 [2024-07-24 19:06:59.932480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.490 [2024-07-24 19:06:59.932495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.490 [2024-07-24 19:06:59.932508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.490 [2024-07-24 19:06:59.932543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.491 qpair failed and we were unable to recover it.
00:24:22.491 [2024-07-24 19:06:59.942339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.491 [2024-07-24 19:06:59.942459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.491 [2024-07-24 19:06:59.942484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.491 [2024-07-24 19:06:59.942499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.491 [2024-07-24 19:06:59.942512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.491 [2024-07-24 19:06:59.942541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.491 qpair failed and we were unable to recover it.
00:24:22.491 [2024-07-24 19:06:59.952416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.491 [2024-07-24 19:06:59.952554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.491 [2024-07-24 19:06:59.952581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.491 [2024-07-24 19:06:59.952595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.491 [2024-07-24 19:06:59.952608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.491 [2024-07-24 19:06:59.952637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.491 qpair failed and we were unable to recover it.
00:24:22.491 [2024-07-24 19:06:59.962414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.491 [2024-07-24 19:06:59.962544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.491 [2024-07-24 19:06:59.962569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.491 [2024-07-24 19:06:59.962584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.491 [2024-07-24 19:06:59.962597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.491 [2024-07-24 19:06:59.962626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.491 qpair failed and we were unable to recover it.
00:24:22.491 [2024-07-24 19:06:59.972413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.491 [2024-07-24 19:06:59.972535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.491 [2024-07-24 19:06:59.972560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.491 [2024-07-24 19:06:59.972575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.491 [2024-07-24 19:06:59.972587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.491 [2024-07-24 19:06:59.972616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.491 qpair failed and we were unable to recover it.
00:24:22.491 [2024-07-24 19:06:59.982461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.491 [2024-07-24 19:06:59.982603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.491 [2024-07-24 19:06:59.982630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.491 [2024-07-24 19:06:59.982646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.491 [2024-07-24 19:06:59.982659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.491 [2024-07-24 19:06:59.982702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.491 qpair failed and we were unable to recover it.
00:24:22.491 [2024-07-24 19:06:59.992510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.491 [2024-07-24 19:06:59.992640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.491 [2024-07-24 19:06:59.992666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.491 [2024-07-24 19:06:59.992681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.491 [2024-07-24 19:06:59.992694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.491 [2024-07-24 19:06:59.992724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.491 qpair failed and we were unable to recover it.
00:24:22.491 [2024-07-24 19:07:00.002569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.491 [2024-07-24 19:07:00.002724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.491 [2024-07-24 19:07:00.002759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.491 [2024-07-24 19:07:00.002779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.491 [2024-07-24 19:07:00.002797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.491 [2024-07-24 19:07:00.002836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.491 qpair failed and we were unable to recover it.
00:24:22.491 [2024-07-24 19:07:00.012567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.491 [2024-07-24 19:07:00.012706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.491 [2024-07-24 19:07:00.012736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.491 [2024-07-24 19:07:00.012755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.491 [2024-07-24 19:07:00.012769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.491 [2024-07-24 19:07:00.012801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.491 qpair failed and we were unable to recover it.
00:24:22.491 [2024-07-24 19:07:00.022577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.491 [2024-07-24 19:07:00.022704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.491 [2024-07-24 19:07:00.022731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.491 [2024-07-24 19:07:00.022746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.491 [2024-07-24 19:07:00.022765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.491 [2024-07-24 19:07:00.022796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.491 qpair failed and we were unable to recover it.
00:24:22.491 [2024-07-24 19:07:00.032612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.491 [2024-07-24 19:07:00.032734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.491 [2024-07-24 19:07:00.032760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.491 [2024-07-24 19:07:00.032775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.491 [2024-07-24 19:07:00.032788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.491 [2024-07-24 19:07:00.032817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.491 qpair failed and we were unable to recover it.
00:24:22.491 [2024-07-24 19:07:00.042681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.491 [2024-07-24 19:07:00.042810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.491 [2024-07-24 19:07:00.042836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.491 [2024-07-24 19:07:00.042851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.491 [2024-07-24 19:07:00.042863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.491 [2024-07-24 19:07:00.042894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.491 qpair failed and we were unable to recover it.
00:24:22.491 [2024-07-24 19:07:00.052682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.491 [2024-07-24 19:07:00.052814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.491 [2024-07-24 19:07:00.052841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.492 [2024-07-24 19:07:00.052856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.492 [2024-07-24 19:07:00.052872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.492 [2024-07-24 19:07:00.052903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.492 qpair failed and we were unable to recover it.
00:24:22.492 [2024-07-24 19:07:00.062715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.492 [2024-07-24 19:07:00.062835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.492 [2024-07-24 19:07:00.062861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.492 [2024-07-24 19:07:00.062876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.492 [2024-07-24 19:07:00.062889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0728000b90
00:24:22.492 [2024-07-24 19:07:00.062920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:22.492 qpair failed and we were unable to recover it.
00:24:22.492 [2024-07-24 19:07:00.072739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.492 [2024-07-24 19:07:00.072898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.492 [2024-07-24 19:07:00.072931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.492 [2024-07-24 19:07:00.072948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.492 [2024-07-24 19:07:00.072961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:22.492 [2024-07-24 19:07:00.072993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:22.492 qpair failed and we were unable to recover it.
00:24:22.492 [2024-07-24 19:07:00.082788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.492 [2024-07-24 19:07:00.082956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.492 [2024-07-24 19:07:00.082983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.492 [2024-07-24 19:07:00.082998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.492 [2024-07-24 19:07:00.083011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:22.492 [2024-07-24 19:07:00.083041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:22.492 qpair failed and we were unable to recover it.
00:24:22.751 [2024-07-24 19:07:00.092787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.751 [2024-07-24 19:07:00.092914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.751 [2024-07-24 19:07:00.092941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.751 [2024-07-24 19:07:00.092956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.751 [2024-07-24 19:07:00.092970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:22.751 [2024-07-24 19:07:00.093001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:22.751 qpair failed and we were unable to recover it.
00:24:22.751 [2024-07-24 19:07:00.102881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.751 [2024-07-24 19:07:00.103014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.751 [2024-07-24 19:07:00.103044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.751 [2024-07-24 19:07:00.103063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.751 [2024-07-24 19:07:00.103077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:22.751 [2024-07-24 19:07:00.103117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:22.751 qpair failed and we were unable to recover it.
00:24:22.751 [2024-07-24 19:07:00.112846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.751 [2024-07-24 19:07:00.112971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.751 [2024-07-24 19:07:00.112998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.751 [2024-07-24 19:07:00.113019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.751 [2024-07-24 19:07:00.113034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:22.751 [2024-07-24 19:07:00.113065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:22.751 qpair failed and we were unable to recover it.
00:24:22.751 [2024-07-24 19:07:00.122904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.751 [2024-07-24 19:07:00.123056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.751 [2024-07-24 19:07:00.123085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.751 [2024-07-24 19:07:00.123108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.751 [2024-07-24 19:07:00.123126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:22.751 [2024-07-24 19:07:00.123158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:22.751 qpair failed and we were unable to recover it.
00:24:22.751 [2024-07-24 19:07:00.132899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.751 [2024-07-24 19:07:00.133029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.751 [2024-07-24 19:07:00.133056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.751 [2024-07-24 19:07:00.133073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.751 [2024-07-24 19:07:00.133087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:22.751 [2024-07-24 19:07:00.133130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:22.751 qpair failed and we were unable to recover it.
00:24:22.751 [2024-07-24 19:07:00.142943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.751 [2024-07-24 19:07:00.143096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.751 [2024-07-24 19:07:00.143129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.751 [2024-07-24 19:07:00.143144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.751 [2024-07-24 19:07:00.143157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:22.751 [2024-07-24 19:07:00.143188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:22.751 qpair failed and we were unable to recover it.
00:24:22.751 [2024-07-24 19:07:00.152955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.751 [2024-07-24 19:07:00.153078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.751 [2024-07-24 19:07:00.153112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.751 [2024-07-24 19:07:00.153129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.751 [2024-07-24 19:07:00.153141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:22.751 [2024-07-24 19:07:00.153171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:22.751 qpair failed and we were unable to recover it.
00:24:22.751 [2024-07-24 19:07:00.163023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.751 [2024-07-24 19:07:00.163169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.751 [2024-07-24 19:07:00.163196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.751 [2024-07-24 19:07:00.163212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.751 [2024-07-24 19:07:00.163225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:22.751 [2024-07-24 19:07:00.163255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:22.751 qpair failed and we were unable to recover it.
00:24:22.751 [2024-07-24 19:07:00.173034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.751 [2024-07-24 19:07:00.173206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.751 [2024-07-24 19:07:00.173234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.751 [2024-07-24 19:07:00.173252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.751 [2024-07-24 19:07:00.173265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:22.751 [2024-07-24 19:07:00.173295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:22.751 qpair failed and we were unable to recover it.
00:24:22.751 [2024-07-24 19:07:00.183053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.751 [2024-07-24 19:07:00.183211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.751 [2024-07-24 19:07:00.183238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.751 [2024-07-24 19:07:00.183254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.751 [2024-07-24 19:07:00.183267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:22.751 [2024-07-24 19:07:00.183297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:22.751 qpair failed and we were unable to recover it.
00:24:22.751 [2024-07-24 19:07:00.193074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.751 [2024-07-24 19:07:00.193210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.751 [2024-07-24 19:07:00.193237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.751 [2024-07-24 19:07:00.193252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.751 [2024-07-24 19:07:00.193265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:22.751 [2024-07-24 19:07:00.193295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:22.751 qpair failed and we were unable to recover it.
00:24:22.752 [2024-07-24 19:07:00.203116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.752 [2024-07-24 19:07:00.203274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.752 [2024-07-24 19:07:00.203306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.752 [2024-07-24 19:07:00.203322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.752 [2024-07-24 19:07:00.203335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:22.752 [2024-07-24 19:07:00.203365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:22.752 qpair failed and we were unable to recover it.
00:24:22.752 [2024-07-24 19:07:00.213136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.752 [2024-07-24 19:07:00.213262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.752 [2024-07-24 19:07:00.213289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.752 [2024-07-24 19:07:00.213304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.752 [2024-07-24 19:07:00.213317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:22.752 [2024-07-24 19:07:00.213360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:22.752 qpair failed and we were unable to recover it.
00:24:22.752 [2024-07-24 19:07:00.223157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.752 [2024-07-24 19:07:00.223277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.752 [2024-07-24 19:07:00.223304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.752 [2024-07-24 19:07:00.223319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.752 [2024-07-24 19:07:00.223333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:22.752 [2024-07-24 19:07:00.223363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:22.752 qpair failed and we were unable to recover it.
00:24:22.752 [2024-07-24 19:07:00.233179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.752 [2024-07-24 19:07:00.233318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.752 [2024-07-24 19:07:00.233345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.752 [2024-07-24 19:07:00.233361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.752 [2024-07-24 19:07:00.233373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:22.752 [2024-07-24 19:07:00.233415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:22.752 qpair failed and we were unable to recover it.
00:24:22.752 [2024-07-24 19:07:00.243247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.752 [2024-07-24 19:07:00.243390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.752 [2024-07-24 19:07:00.243417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.752 [2024-07-24 19:07:00.243432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.752 [2024-07-24 19:07:00.243445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:22.752 [2024-07-24 19:07:00.243481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:22.752 qpair failed and we were unable to recover it.
00:24:22.752 [2024-07-24 19:07:00.253234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.752 [2024-07-24 19:07:00.253361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.752 [2024-07-24 19:07:00.253388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.752 [2024-07-24 19:07:00.253403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.752 [2024-07-24 19:07:00.253416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:22.752 [2024-07-24 19:07:00.253459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:22.752 qpair failed and we were unable to recover it.
00:24:22.752 [2024-07-24 19:07:00.263304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.752 [2024-07-24 19:07:00.263438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.752 [2024-07-24 19:07:00.263464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.752 [2024-07-24 19:07:00.263480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.752 [2024-07-24 19:07:00.263496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:22.752 [2024-07-24 19:07:00.263526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:22.752 qpair failed and we were unable to recover it.
00:24:22.752 [2024-07-24 19:07:00.273273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.752 [2024-07-24 19:07:00.273398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.752 [2024-07-24 19:07:00.273424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.752 [2024-07-24 19:07:00.273439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.752 [2024-07-24 19:07:00.273452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:22.752 [2024-07-24 19:07:00.273481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:22.752 qpair failed and we were unable to recover it.
00:24:22.752 [2024-07-24 19:07:00.283339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.752 [2024-07-24 19:07:00.283475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.752 [2024-07-24 19:07:00.283501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.752 [2024-07-24 19:07:00.283515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.752 [2024-07-24 19:07:00.283528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:22.752 [2024-07-24 19:07:00.283558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:22.752 qpair failed and we were unable to recover it.
00:24:22.752 [2024-07-24 19:07:00.293354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.752 [2024-07-24 19:07:00.293526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.752 [2024-07-24 19:07:00.293559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.752 [2024-07-24 19:07:00.293575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.752 [2024-07-24 19:07:00.293588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:22.752 [2024-07-24 19:07:00.293618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:22.752 qpair failed and we were unable to recover it.
00:24:22.752 [2024-07-24 19:07:00.303374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.752 [2024-07-24 19:07:00.303510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.752 [2024-07-24 19:07:00.303536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.752 [2024-07-24 19:07:00.303551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.752 [2024-07-24 19:07:00.303564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:22.752 [2024-07-24 19:07:00.303593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:22.752 qpair failed and we were unable to recover it.
00:24:22.752 [2024-07-24 19:07:00.313426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.752 [2024-07-24 19:07:00.313546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.752 [2024-07-24 19:07:00.313573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.752 [2024-07-24 19:07:00.313588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.752 [2024-07-24 19:07:00.313601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:22.752 [2024-07-24 19:07:00.313632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:22.752 qpair failed and we were unable to recover it.
00:24:22.752 [2024-07-24 19:07:00.323647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.752 [2024-07-24 19:07:00.323804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.752 [2024-07-24 19:07:00.323831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.752 [2024-07-24 19:07:00.323846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.752 [2024-07-24 19:07:00.323859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:22.752 [2024-07-24 19:07:00.323889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:22.752 qpair failed and we were unable to recover it.
00:24:22.752 [2024-07-24 19:07:00.333491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.752 [2024-07-24 19:07:00.333654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.753 [2024-07-24 19:07:00.333681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.753 [2024-07-24 19:07:00.333696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.753 [2024-07-24 19:07:00.333709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:22.753 [2024-07-24 19:07:00.333746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:22.753 qpair failed and we were unable to recover it.
00:24:22.753 [2024-07-24 19:07:00.343572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.753 [2024-07-24 19:07:00.343741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.753 [2024-07-24 19:07:00.343767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.753 [2024-07-24 19:07:00.343782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.753 [2024-07-24 19:07:00.343794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:22.753 [2024-07-24 19:07:00.343824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:22.753 qpair failed and we were unable to recover it.
00:24:23.012 [2024-07-24 19:07:00.353597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.012 [2024-07-24 19:07:00.353723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.012 [2024-07-24 19:07:00.353750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.012 [2024-07-24 19:07:00.353765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.012 [2024-07-24 19:07:00.353778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.012 [2024-07-24 19:07:00.353820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.012 qpair failed and we were unable to recover it.
00:24:23.012 [2024-07-24 19:07:00.363585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.012 [2024-07-24 19:07:00.363717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.012 [2024-07-24 19:07:00.363744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.012 [2024-07-24 19:07:00.363759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.012 [2024-07-24 19:07:00.363772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.012 [2024-07-24 19:07:00.363814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.012 qpair failed and we were unable to recover it.
00:24:23.012 [2024-07-24 19:07:00.373588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.012 [2024-07-24 19:07:00.373757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.012 [2024-07-24 19:07:00.373783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.012 [2024-07-24 19:07:00.373798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.012 [2024-07-24 19:07:00.373810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.012 [2024-07-24 19:07:00.373841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.012 qpair failed and we were unable to recover it.
00:24:23.012 [2024-07-24 19:07:00.383624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.012 [2024-07-24 19:07:00.383754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.012 [2024-07-24 19:07:00.383786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.012 [2024-07-24 19:07:00.383801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.012 [2024-07-24 19:07:00.383814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.012 [2024-07-24 19:07:00.383857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.012 qpair failed and we were unable to recover it.
00:24:23.012 [2024-07-24 19:07:00.393650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.012 [2024-07-24 19:07:00.393773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.012 [2024-07-24 19:07:00.393799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.012 [2024-07-24 19:07:00.393814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.012 [2024-07-24 19:07:00.393827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.012 [2024-07-24 19:07:00.393858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.012 qpair failed and we were unable to recover it.
00:24:23.012 [2024-07-24 19:07:00.403669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.012 [2024-07-24 19:07:00.403820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.012 [2024-07-24 19:07:00.403848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.012 [2024-07-24 19:07:00.403864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.012 [2024-07-24 19:07:00.403882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.012 [2024-07-24 19:07:00.403914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.012 qpair failed and we were unable to recover it.
00:24:23.012 [2024-07-24 19:07:00.413669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.012 [2024-07-24 19:07:00.413790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.012 [2024-07-24 19:07:00.413817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.012 [2024-07-24 19:07:00.413832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.012 [2024-07-24 19:07:00.413844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.012 [2024-07-24 19:07:00.413874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.012 qpair failed and we were unable to recover it.
00:24:23.012 [2024-07-24 19:07:00.423726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.013 [2024-07-24 19:07:00.423888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.013 [2024-07-24 19:07:00.423915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.013 [2024-07-24 19:07:00.423930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.013 [2024-07-24 19:07:00.423948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.013 [2024-07-24 19:07:00.423980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.013 qpair failed and we were unable to recover it.
00:24:23.013 [2024-07-24 19:07:00.433752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.013 [2024-07-24 19:07:00.433916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.013 [2024-07-24 19:07:00.433949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.013 [2024-07-24 19:07:00.433964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.013 [2024-07-24 19:07:00.433978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.013 [2024-07-24 19:07:00.434008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.013 qpair failed and we were unable to recover it.
00:24:23.013 [2024-07-24 19:07:00.443806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.013 [2024-07-24 19:07:00.443951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.013 [2024-07-24 19:07:00.443977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.013 [2024-07-24 19:07:00.443991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.013 [2024-07-24 19:07:00.444004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.013 [2024-07-24 19:07:00.444035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.013 qpair failed and we were unable to recover it.
00:24:23.013 [2024-07-24 19:07:00.453791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.013 [2024-07-24 19:07:00.453926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.013 [2024-07-24 19:07:00.453953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.013 [2024-07-24 19:07:00.453968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.013 [2024-07-24 19:07:00.453981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.013 [2024-07-24 19:07:00.454012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.013 qpair failed and we were unable to recover it.
00:24:23.013 [2024-07-24 19:07:00.463826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.013 [2024-07-24 19:07:00.463950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.013 [2024-07-24 19:07:00.463976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.013 [2024-07-24 19:07:00.463997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.013 [2024-07-24 19:07:00.464010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.013 [2024-07-24 19:07:00.464041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.013 qpair failed and we were unable to recover it.
00:24:23.013 [2024-07-24 19:07:00.473875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.013 [2024-07-24 19:07:00.474008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.013 [2024-07-24 19:07:00.474034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.013 [2024-07-24 19:07:00.474049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.013 [2024-07-24 19:07:00.474062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.013 [2024-07-24 19:07:00.474092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.013 qpair failed and we were unable to recover it.
00:24:23.013 [2024-07-24 19:07:00.483888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.013 [2024-07-24 19:07:00.484038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.013 [2024-07-24 19:07:00.484066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.013 [2024-07-24 19:07:00.484086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.013 [2024-07-24 19:07:00.484100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.013 [2024-07-24 19:07:00.484145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.013 qpair failed and we were unable to recover it.
00:24:23.013 [2024-07-24 19:07:00.493925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.013 [2024-07-24 19:07:00.494071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.013 [2024-07-24 19:07:00.494097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.013 [2024-07-24 19:07:00.494122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.013 [2024-07-24 19:07:00.494136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.013 [2024-07-24 19:07:00.494166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.013 qpair failed and we were unable to recover it.
00:24:23.013 [2024-07-24 19:07:00.503945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.013 [2024-07-24 19:07:00.504068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.013 [2024-07-24 19:07:00.504098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.013 [2024-07-24 19:07:00.504124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.013 [2024-07-24 19:07:00.504138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.013 [2024-07-24 19:07:00.504181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.013 qpair failed and we were unable to recover it.
00:24:23.013 [2024-07-24 19:07:00.513959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.013 [2024-07-24 19:07:00.514092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.013 [2024-07-24 19:07:00.514127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.013 [2024-07-24 19:07:00.514148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.013 [2024-07-24 19:07:00.514163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.013 [2024-07-24 19:07:00.514195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.013 qpair failed and we were unable to recover it.
00:24:23.013 [2024-07-24 19:07:00.524000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.013 [2024-07-24 19:07:00.524171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.013 [2024-07-24 19:07:00.524198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.013 [2024-07-24 19:07:00.524213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.013 [2024-07-24 19:07:00.524226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.013 [2024-07-24 19:07:00.524257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.013 qpair failed and we were unable to recover it.
00:24:23.013 [2024-07-24 19:07:00.534016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.013 [2024-07-24 19:07:00.534151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.013 [2024-07-24 19:07:00.534178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.013 [2024-07-24 19:07:00.534193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.013 [2024-07-24 19:07:00.534206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.013 [2024-07-24 19:07:00.534236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.013 qpair failed and we were unable to recover it.
00:24:23.013 [2024-07-24 19:07:00.544044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.013 [2024-07-24 19:07:00.544173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.013 [2024-07-24 19:07:00.544200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.013 [2024-07-24 19:07:00.544215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.013 [2024-07-24 19:07:00.544228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.013 [2024-07-24 19:07:00.544260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.013 qpair failed and we were unable to recover it.
00:24:23.014 [2024-07-24 19:07:00.554090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.014 [2024-07-24 19:07:00.554224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.014 [2024-07-24 19:07:00.554251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.014 [2024-07-24 19:07:00.554266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.014 [2024-07-24 19:07:00.554279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.014 [2024-07-24 19:07:00.554310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.014 qpair failed and we were unable to recover it.
00:24:23.014 [2024-07-24 19:07:00.564127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.014 [2024-07-24 19:07:00.564255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.014 [2024-07-24 19:07:00.564282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.014 [2024-07-24 19:07:00.564297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.014 [2024-07-24 19:07:00.564311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.014 [2024-07-24 19:07:00.564353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.014 qpair failed and we were unable to recover it.
00:24:23.014 [2024-07-24 19:07:00.574161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.014 [2024-07-24 19:07:00.574291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.014 [2024-07-24 19:07:00.574317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.014 [2024-07-24 19:07:00.574332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.014 [2024-07-24 19:07:00.574345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.014 [2024-07-24 19:07:00.574375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.014 qpair failed and we were unable to recover it.
00:24:23.014 [2024-07-24 19:07:00.584178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.014 [2024-07-24 19:07:00.584334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.014 [2024-07-24 19:07:00.584362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.014 [2024-07-24 19:07:00.584383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.014 [2024-07-24 19:07:00.584398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.014 [2024-07-24 19:07:00.584430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.014 qpair failed and we were unable to recover it.
00:24:23.014 [2024-07-24 19:07:00.594188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.014 [2024-07-24 19:07:00.594361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.014 [2024-07-24 19:07:00.594387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.014 [2024-07-24 19:07:00.594402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.014 [2024-07-24 19:07:00.594415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.014 [2024-07-24 19:07:00.594445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.014 qpair failed and we were unable to recover it.
00:24:23.014 [2024-07-24 19:07:00.604349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.014 [2024-07-24 19:07:00.604484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.014 [2024-07-24 19:07:00.604510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.014 [2024-07-24 19:07:00.604531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.014 [2024-07-24 19:07:00.604545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.014 [2024-07-24 19:07:00.604575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.014 qpair failed and we were unable to recover it.
00:24:23.272 [2024-07-24 19:07:00.614248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.272 [2024-07-24 19:07:00.614376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.272 [2024-07-24 19:07:00.614402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.272 [2024-07-24 19:07:00.614417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.272 [2024-07-24 19:07:00.614429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.272 [2024-07-24 19:07:00.614460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.272 qpair failed and we were unable to recover it.
00:24:23.272 [2024-07-24 19:07:00.624301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.272 [2024-07-24 19:07:00.624426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.272 [2024-07-24 19:07:00.624452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.272 [2024-07-24 19:07:00.624466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.272 [2024-07-24 19:07:00.624479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.272 [2024-07-24 19:07:00.624509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.272 qpair failed and we were unable to recover it.
00:24:23.272 [2024-07-24 19:07:00.634343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.273 [2024-07-24 19:07:00.634469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.273 [2024-07-24 19:07:00.634495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.273 [2024-07-24 19:07:00.634510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.273 [2024-07-24 19:07:00.634523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.273 [2024-07-24 19:07:00.634552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.273 qpair failed and we were unable to recover it.
00:24:23.273 [2024-07-24 19:07:00.644374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.273 [2024-07-24 19:07:00.644505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.273 [2024-07-24 19:07:00.644532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.273 [2024-07-24 19:07:00.644547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.273 [2024-07-24 19:07:00.644560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.273 [2024-07-24 19:07:00.644590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.273 qpair failed and we were unable to recover it.
00:24:23.273 [2024-07-24 19:07:00.654381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.273 [2024-07-24 19:07:00.654514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.273 [2024-07-24 19:07:00.654540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.273 [2024-07-24 19:07:00.654555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.273 [2024-07-24 19:07:00.654568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.273 [2024-07-24 19:07:00.654598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.273 qpair failed and we were unable to recover it.
00:24:23.273 [2024-07-24 19:07:00.664413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.273 [2024-07-24 19:07:00.664576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.273 [2024-07-24 19:07:00.664602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.273 [2024-07-24 19:07:00.664617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.273 [2024-07-24 19:07:00.664630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.273 [2024-07-24 19:07:00.664659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.273 qpair failed and we were unable to recover it.
00:24:23.273 [2024-07-24 19:07:00.674429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.273 [2024-07-24 19:07:00.674551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.273 [2024-07-24 19:07:00.674578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.273 [2024-07-24 19:07:00.674593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.273 [2024-07-24 19:07:00.674605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.273 [2024-07-24 19:07:00.674648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.273 qpair failed and we were unable to recover it.
00:24:23.273 [2024-07-24 19:07:00.684453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.273 [2024-07-24 19:07:00.684600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.273 [2024-07-24 19:07:00.684626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.273 [2024-07-24 19:07:00.684641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.273 [2024-07-24 19:07:00.684654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.273 [2024-07-24 19:07:00.684684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.273 qpair failed and we were unable to recover it.
00:24:23.273 [2024-07-24 19:07:00.694458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.273 [2024-07-24 19:07:00.694584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.273 [2024-07-24 19:07:00.694615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.273 [2024-07-24 19:07:00.694631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.273 [2024-07-24 19:07:00.694644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.273 [2024-07-24 19:07:00.694674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.273 qpair failed and we were unable to recover it.
00:24:23.273 [2024-07-24 19:07:00.704520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.273 [2024-07-24 19:07:00.704648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.273 [2024-07-24 19:07:00.704675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.273 [2024-07-24 19:07:00.704691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.273 [2024-07-24 19:07:00.704707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.273 [2024-07-24 19:07:00.704737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.273 qpair failed and we were unable to recover it. 00:24:23.273 [2024-07-24 19:07:00.714603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.273 [2024-07-24 19:07:00.714755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.273 [2024-07-24 19:07:00.714781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.273 [2024-07-24 19:07:00.714796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.273 [2024-07-24 19:07:00.714809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.273 [2024-07-24 19:07:00.714839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.273 qpair failed and we were unable to recover it. 00:24:23.273 [2024-07-24 19:07:00.724598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.273 [2024-07-24 19:07:00.724738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.273 [2024-07-24 19:07:00.724764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.273 [2024-07-24 19:07:00.724779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.273 [2024-07-24 19:07:00.724792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.273 [2024-07-24 19:07:00.724821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.273 qpair failed and we were unable to recover it. 
00:24:23.273 [2024-07-24 19:07:00.734626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.273 [2024-07-24 19:07:00.734755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.273 [2024-07-24 19:07:00.734781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.273 [2024-07-24 19:07:00.734796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.273 [2024-07-24 19:07:00.734809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.273 [2024-07-24 19:07:00.734845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.273 qpair failed and we were unable to recover it. 00:24:23.273 [2024-07-24 19:07:00.744653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.273 [2024-07-24 19:07:00.744781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.273 [2024-07-24 19:07:00.744809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.273 [2024-07-24 19:07:00.744824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.273 [2024-07-24 19:07:00.744837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.273 [2024-07-24 19:07:00.744867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.273 qpair failed and we were unable to recover it. 00:24:23.273 [2024-07-24 19:07:00.754673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.273 [2024-07-24 19:07:00.754798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.273 [2024-07-24 19:07:00.754824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.273 [2024-07-24 19:07:00.754839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.273 [2024-07-24 19:07:00.754852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.273 [2024-07-24 19:07:00.754881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.273 qpair failed and we were unable to recover it. 
00:24:23.273 [2024-07-24 19:07:00.764764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.273 [2024-07-24 19:07:00.764911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.273 [2024-07-24 19:07:00.764940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.273 [2024-07-24 19:07:00.764959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.274 [2024-07-24 19:07:00.764973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.274 [2024-07-24 19:07:00.765005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.274 qpair failed and we were unable to recover it. 00:24:23.274 [2024-07-24 19:07:00.774738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.274 [2024-07-24 19:07:00.774870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.274 [2024-07-24 19:07:00.774896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.274 [2024-07-24 19:07:00.774911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.274 [2024-07-24 19:07:00.774925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.274 [2024-07-24 19:07:00.774954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.274 qpair failed and we were unable to recover it. 00:24:23.274 [2024-07-24 19:07:00.784733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.274 [2024-07-24 19:07:00.784906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.274 [2024-07-24 19:07:00.784937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.274 [2024-07-24 19:07:00.784953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.274 [2024-07-24 19:07:00.784966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.274 [2024-07-24 19:07:00.784996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.274 qpair failed and we were unable to recover it. 
00:24:23.274 [2024-07-24 19:07:00.794780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.274 [2024-07-24 19:07:00.794902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.274 [2024-07-24 19:07:00.794928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.274 [2024-07-24 19:07:00.794943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.274 [2024-07-24 19:07:00.794957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.274 [2024-07-24 19:07:00.794988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.274 qpair failed and we were unable to recover it. 00:24:23.274 [2024-07-24 19:07:00.804779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.274 [2024-07-24 19:07:00.804905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.274 [2024-07-24 19:07:00.804931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.274 [2024-07-24 19:07:00.804946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.274 [2024-07-24 19:07:00.804959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.274 [2024-07-24 19:07:00.804988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.274 qpair failed and we were unable to recover it. 00:24:23.274 [2024-07-24 19:07:00.814793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.274 [2024-07-24 19:07:00.814914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.274 [2024-07-24 19:07:00.814939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.274 [2024-07-24 19:07:00.814954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.274 [2024-07-24 19:07:00.814967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.274 [2024-07-24 19:07:00.814996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.274 qpair failed and we were unable to recover it. 
00:24:23.274 [2024-07-24 19:07:00.824825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.274 [2024-07-24 19:07:00.824977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.274 [2024-07-24 19:07:00.825002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.274 [2024-07-24 19:07:00.825017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.274 [2024-07-24 19:07:00.825036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.274 [2024-07-24 19:07:00.825068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.274 qpair failed and we were unable to recover it. 00:24:23.274 [2024-07-24 19:07:00.834896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.274 [2024-07-24 19:07:00.835019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.274 [2024-07-24 19:07:00.835045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.274 [2024-07-24 19:07:00.835060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.274 [2024-07-24 19:07:00.835073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.274 [2024-07-24 19:07:00.835110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.274 qpair failed and we were unable to recover it. 00:24:23.274 [2024-07-24 19:07:00.844895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.274 [2024-07-24 19:07:00.845049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.274 [2024-07-24 19:07:00.845074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.274 [2024-07-24 19:07:00.845089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.274 [2024-07-24 19:07:00.845108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.274 [2024-07-24 19:07:00.845141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.274 qpair failed and we were unable to recover it. 
00:24:23.274 [2024-07-24 19:07:00.854952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.274 [2024-07-24 19:07:00.855099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.274 [2024-07-24 19:07:00.855133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.274 [2024-07-24 19:07:00.855149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.274 [2024-07-24 19:07:00.855162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.274 [2024-07-24 19:07:00.855192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.274 qpair failed and we were unable to recover it. 00:24:23.274 [2024-07-24 19:07:00.864982] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.274 [2024-07-24 19:07:00.865152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.274 [2024-07-24 19:07:00.865178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.274 [2024-07-24 19:07:00.865192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.274 [2024-07-24 19:07:00.865205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.274 [2024-07-24 19:07:00.865235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.274 qpair failed and we were unable to recover it. 00:24:23.532 [2024-07-24 19:07:00.874994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.532 [2024-07-24 19:07:00.875136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.532 [2024-07-24 19:07:00.875163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.532 [2024-07-24 19:07:00.875178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.532 [2024-07-24 19:07:00.875192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.532 [2024-07-24 19:07:00.875223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.532 qpair failed and we were unable to recover it. 
00:24:23.532 [2024-07-24 19:07:00.885037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.532 [2024-07-24 19:07:00.885182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.532 [2024-07-24 19:07:00.885208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.532 [2024-07-24 19:07:00.885223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.532 [2024-07-24 19:07:00.885235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.532 [2024-07-24 19:07:00.885266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.532 qpair failed and we were unable to recover it. 00:24:23.532 [2024-07-24 19:07:00.895034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.532 [2024-07-24 19:07:00.895171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.532 [2024-07-24 19:07:00.895197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.532 [2024-07-24 19:07:00.895212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.532 [2024-07-24 19:07:00.895225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.532 [2024-07-24 19:07:00.895257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.532 qpair failed and we were unable to recover it. 00:24:23.532 [2024-07-24 19:07:00.905112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.532 [2024-07-24 19:07:00.905247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.532 [2024-07-24 19:07:00.905273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.532 [2024-07-24 19:07:00.905288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.532 [2024-07-24 19:07:00.905301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.532 [2024-07-24 19:07:00.905330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.532 qpair failed and we were unable to recover it. 
00:24:23.532 [2024-07-24 19:07:00.915119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.532 [2024-07-24 19:07:00.915252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.532 [2024-07-24 19:07:00.915278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.532 [2024-07-24 19:07:00.915299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.532 [2024-07-24 19:07:00.915313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.532 [2024-07-24 19:07:00.915346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.532 qpair failed and we were unable to recover it. 00:24:23.532 [2024-07-24 19:07:00.925183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.532 [2024-07-24 19:07:00.925356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.532 [2024-07-24 19:07:00.925383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.532 [2024-07-24 19:07:00.925398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.532 [2024-07-24 19:07:00.925411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.532 [2024-07-24 19:07:00.925443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.532 qpair failed and we were unable to recover it. 00:24:23.532 [2024-07-24 19:07:00.935181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.532 [2024-07-24 19:07:00.935332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.532 [2024-07-24 19:07:00.935358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.532 [2024-07-24 19:07:00.935373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.532 [2024-07-24 19:07:00.935386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.532 [2024-07-24 19:07:00.935416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.532 qpair failed and we were unable to recover it. 
00:24:23.532 [2024-07-24 19:07:00.945171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.532 [2024-07-24 19:07:00.945297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.532 [2024-07-24 19:07:00.945323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.532 [2024-07-24 19:07:00.945338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.532 [2024-07-24 19:07:00.945351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.532 [2024-07-24 19:07:00.945381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.532 qpair failed and we were unable to recover it. 00:24:23.532 [2024-07-24 19:07:00.955215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.532 [2024-07-24 19:07:00.955340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.532 [2024-07-24 19:07:00.955367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.532 [2024-07-24 19:07:00.955382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.532 [2024-07-24 19:07:00.955395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.532 [2024-07-24 19:07:00.955427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.532 qpair failed and we were unable to recover it. 00:24:23.532 [2024-07-24 19:07:00.965236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.532 [2024-07-24 19:07:00.965360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.532 [2024-07-24 19:07:00.965386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.532 [2024-07-24 19:07:00.965401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.532 [2024-07-24 19:07:00.965414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.532 [2024-07-24 19:07:00.965444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.532 qpair failed and we were unable to recover it. 
00:24:23.532 [2024-07-24 19:07:00.975287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.532 [2024-07-24 19:07:00.975410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.532 [2024-07-24 19:07:00.975436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.532 [2024-07-24 19:07:00.975450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.532 [2024-07-24 19:07:00.975464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.532 [2024-07-24 19:07:00.975493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.532 qpair failed and we were unable to recover it. 00:24:23.532 [2024-07-24 19:07:00.985298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.532 [2024-07-24 19:07:00.985420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.532 [2024-07-24 19:07:00.985445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.532 [2024-07-24 19:07:00.985459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.532 [2024-07-24 19:07:00.985472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.532 [2024-07-24 19:07:00.985503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.532 qpair failed and we were unable to recover it. 00:24:23.532 [2024-07-24 19:07:00.995336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.532 [2024-07-24 19:07:00.995461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.532 [2024-07-24 19:07:00.995487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.532 [2024-07-24 19:07:00.995502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.532 [2024-07-24 19:07:00.995515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.532 [2024-07-24 19:07:00.995544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.532 qpair failed and we were unable to recover it. 
00:24:23.532 [2024-07-24 19:07:01.005371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.532 [2024-07-24 19:07:01.005514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.532 [2024-07-24 19:07:01.005540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.532 [2024-07-24 19:07:01.005561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.532 [2024-07-24 19:07:01.005575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.532 [2024-07-24 19:07:01.005606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.532 qpair failed and we were unable to recover it. 00:24:23.532 [2024-07-24 19:07:01.015398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.532 [2024-07-24 19:07:01.015525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.532 [2024-07-24 19:07:01.015551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.532 [2024-07-24 19:07:01.015566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.532 [2024-07-24 19:07:01.015578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.532 [2024-07-24 19:07:01.015608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.532 qpair failed and we were unable to recover it. 00:24:23.532 [2024-07-24 19:07:01.025420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.532 [2024-07-24 19:07:01.025544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.532 [2024-07-24 19:07:01.025569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.532 [2024-07-24 19:07:01.025583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.532 [2024-07-24 19:07:01.025597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.532 [2024-07-24 19:07:01.025626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.532 qpair failed and we were unable to recover it. 
00:24:23.532 [2024-07-24 19:07:01.035439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.532 [2024-07-24 19:07:01.035585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.532 [2024-07-24 19:07:01.035610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.532 [2024-07-24 19:07:01.035625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.532 [2024-07-24 19:07:01.035638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.532 [2024-07-24 19:07:01.035669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.532 qpair failed and we were unable to recover it. 00:24:23.532 [2024-07-24 19:07:01.045463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.532 [2024-07-24 19:07:01.045637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.532 [2024-07-24 19:07:01.045663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.532 [2024-07-24 19:07:01.045678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.532 [2024-07-24 19:07:01.045690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.532 [2024-07-24 19:07:01.045720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.532 qpair failed and we were unable to recover it. 00:24:23.532 [2024-07-24 19:07:01.055477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.532 [2024-07-24 19:07:01.055603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.532 [2024-07-24 19:07:01.055629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.532 [2024-07-24 19:07:01.055643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.532 [2024-07-24 19:07:01.055656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.532 [2024-07-24 19:07:01.055685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.532 qpair failed and we were unable to recover it. 
00:24:23.532 [2024-07-24 19:07:01.065603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.532 [2024-07-24 19:07:01.065723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.532 [2024-07-24 19:07:01.065749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.532 [2024-07-24 19:07:01.065764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.532 [2024-07-24 19:07:01.065777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.532 [2024-07-24 19:07:01.065806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.532 qpair failed and we were unable to recover it. 00:24:23.532 [2024-07-24 19:07:01.075565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.532 [2024-07-24 19:07:01.075694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.532 [2024-07-24 19:07:01.075721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.532 [2024-07-24 19:07:01.075736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.532 [2024-07-24 19:07:01.075752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.532 [2024-07-24 19:07:01.075782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.532 qpair failed and we were unable to recover it. 00:24:23.532 [2024-07-24 19:07:01.085601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.532 [2024-07-24 19:07:01.085729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.532 [2024-07-24 19:07:01.085755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.532 [2024-07-24 19:07:01.085770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.532 [2024-07-24 19:07:01.085782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.532 [2024-07-24 19:07:01.085824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.532 qpair failed and we were unable to recover it. 
00:24:23.532 [2024-07-24 19:07:01.095610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.532 [2024-07-24 19:07:01.095732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.532 [2024-07-24 19:07:01.095764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.532 [2024-07-24 19:07:01.095780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.532 [2024-07-24 19:07:01.095793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.532 [2024-07-24 19:07:01.095823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.532 qpair failed and we were unable to recover it. 00:24:23.532 [2024-07-24 19:07:01.105657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.533 [2024-07-24 19:07:01.105785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.533 [2024-07-24 19:07:01.105811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.533 [2024-07-24 19:07:01.105826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.533 [2024-07-24 19:07:01.105839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.533 [2024-07-24 19:07:01.105868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.533 qpair failed and we were unable to recover it. 00:24:23.533 [2024-07-24 19:07:01.115678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.533 [2024-07-24 19:07:01.115809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.533 [2024-07-24 19:07:01.115836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.533 [2024-07-24 19:07:01.115851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.533 [2024-07-24 19:07:01.115867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.533 [2024-07-24 19:07:01.115896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.533 qpair failed and we were unable to recover it. 
00:24:23.533 [2024-07-24 19:07:01.125705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.533 [2024-07-24 19:07:01.125832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.533 [2024-07-24 19:07:01.125857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.533 [2024-07-24 19:07:01.125871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.533 [2024-07-24 19:07:01.125883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.533 [2024-07-24 19:07:01.125913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.533 qpair failed and we were unable to recover it. 00:24:23.791 [2024-07-24 19:07:01.135756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.791 [2024-07-24 19:07:01.135921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.791 [2024-07-24 19:07:01.135947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.791 [2024-07-24 19:07:01.135962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.791 [2024-07-24 19:07:01.135976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.791 [2024-07-24 19:07:01.136012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.791 qpair failed and we were unable to recover it. 00:24:23.791 [2024-07-24 19:07:01.145740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.791 [2024-07-24 19:07:01.145872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.791 [2024-07-24 19:07:01.145898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.791 [2024-07-24 19:07:01.145913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.791 [2024-07-24 19:07:01.145926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.791 [2024-07-24 19:07:01.145956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.791 qpair failed and we were unable to recover it. 
00:24:23.791 [2024-07-24 19:07:01.155763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.791 [2024-07-24 19:07:01.155900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.791 [2024-07-24 19:07:01.155926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.791 [2024-07-24 19:07:01.155940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.791 [2024-07-24 19:07:01.155953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.791 [2024-07-24 19:07:01.155984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.791 qpair failed and we were unable to recover it. 00:24:23.791 [2024-07-24 19:07:01.165838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.791 [2024-07-24 19:07:01.165971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.791 [2024-07-24 19:07:01.165998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.791 [2024-07-24 19:07:01.166014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.791 [2024-07-24 19:07:01.166027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.791 [2024-07-24 19:07:01.166070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.791 qpair failed and we were unable to recover it. 00:24:23.791 [2024-07-24 19:07:01.175821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.791 [2024-07-24 19:07:01.175952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.791 [2024-07-24 19:07:01.175978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.791 [2024-07-24 19:07:01.175993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.791 [2024-07-24 19:07:01.176006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.791 [2024-07-24 19:07:01.176035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.791 qpair failed and we were unable to recover it. 
00:24:23.791 [2024-07-24 19:07:01.185848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.791 [2024-07-24 19:07:01.185973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.791 [2024-07-24 19:07:01.186005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.791 [2024-07-24 19:07:01.186020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.792 [2024-07-24 19:07:01.186033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.792 [2024-07-24 19:07:01.186062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.792 qpair failed and we were unable to recover it. 00:24:23.792 [2024-07-24 19:07:01.195881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.792 [2024-07-24 19:07:01.196009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.792 [2024-07-24 19:07:01.196035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.792 [2024-07-24 19:07:01.196050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.792 [2024-07-24 19:07:01.196062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.792 [2024-07-24 19:07:01.196093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.792 qpair failed and we were unable to recover it. 00:24:23.792 [2024-07-24 19:07:01.205941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.792 [2024-07-24 19:07:01.206091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.792 [2024-07-24 19:07:01.206124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.792 [2024-07-24 19:07:01.206139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.792 [2024-07-24 19:07:01.206153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.792 [2024-07-24 19:07:01.206183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.792 qpair failed and we were unable to recover it. 
00:24:23.792 [2024-07-24 19:07:01.215934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.792 [2024-07-24 19:07:01.216081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.792 [2024-07-24 19:07:01.216114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.792 [2024-07-24 19:07:01.216131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.792 [2024-07-24 19:07:01.216144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.792 [2024-07-24 19:07:01.216186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.792 qpair failed and we were unable to recover it. 00:24:23.792 [2024-07-24 19:07:01.225987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.792 [2024-07-24 19:07:01.226130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.792 [2024-07-24 19:07:01.226156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.792 [2024-07-24 19:07:01.226171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.792 [2024-07-24 19:07:01.226190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.792 [2024-07-24 19:07:01.226223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.792 qpair failed and we were unable to recover it. 00:24:23.792 [2024-07-24 19:07:01.236009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.792 [2024-07-24 19:07:01.236143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.792 [2024-07-24 19:07:01.236169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.792 [2024-07-24 19:07:01.236184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.792 [2024-07-24 19:07:01.236197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.792 [2024-07-24 19:07:01.236226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.792 qpair failed and we were unable to recover it. 
00:24:23.792 [2024-07-24 19:07:01.246078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.792 [2024-07-24 19:07:01.246220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.792 [2024-07-24 19:07:01.246258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.792 [2024-07-24 19:07:01.246273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.792 [2024-07-24 19:07:01.246286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.792 [2024-07-24 19:07:01.246318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.792 qpair failed and we were unable to recover it. 00:24:23.792 [2024-07-24 19:07:01.256073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.792 [2024-07-24 19:07:01.256216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.792 [2024-07-24 19:07:01.256243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.792 [2024-07-24 19:07:01.256258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.792 [2024-07-24 19:07:01.256271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.792 [2024-07-24 19:07:01.256301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.792 qpair failed and we were unable to recover it. 00:24:23.792 [2024-07-24 19:07:01.266086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.792 [2024-07-24 19:07:01.266251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.792 [2024-07-24 19:07:01.266278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.792 [2024-07-24 19:07:01.266293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.792 [2024-07-24 19:07:01.266307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:23.792 [2024-07-24 19:07:01.266336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:23.792 qpair failed and we were unable to recover it. 
00:24:23.792 [2024-07-24 19:07:01.276112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.792 [2024-07-24 19:07:01.276241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.792 [2024-07-24 19:07:01.276267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.792 [2024-07-24 19:07:01.276282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.792 [2024-07-24 19:07:01.276294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.792 [2024-07-24 19:07:01.276338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.792 qpair failed and we were unable to recover it.
00:24:23.792 [2024-07-24 19:07:01.286153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.793 [2024-07-24 19:07:01.286283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.793 [2024-07-24 19:07:01.286309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.793 [2024-07-24 19:07:01.286324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.793 [2024-07-24 19:07:01.286336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.793 [2024-07-24 19:07:01.286367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.793 qpair failed and we were unable to recover it.
00:24:23.793 [2024-07-24 19:07:01.296194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.793 [2024-07-24 19:07:01.296331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.793 [2024-07-24 19:07:01.296358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.793 [2024-07-24 19:07:01.296372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.793 [2024-07-24 19:07:01.296385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.793 [2024-07-24 19:07:01.296428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.793 qpair failed and we were unable to recover it.
00:24:23.793 [2024-07-24 19:07:01.306187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.793 [2024-07-24 19:07:01.306314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.793 [2024-07-24 19:07:01.306341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.793 [2024-07-24 19:07:01.306355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.793 [2024-07-24 19:07:01.306369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.793 [2024-07-24 19:07:01.306399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.793 qpair failed and we were unable to recover it.
00:24:23.793 [2024-07-24 19:07:01.316236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.793 [2024-07-24 19:07:01.316361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.793 [2024-07-24 19:07:01.316387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.793 [2024-07-24 19:07:01.316402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.793 [2024-07-24 19:07:01.316420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.793 [2024-07-24 19:07:01.316452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.793 qpair failed and we were unable to recover it.
00:24:23.793 [2024-07-24 19:07:01.326240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.793 [2024-07-24 19:07:01.326361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.793 [2024-07-24 19:07:01.326387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.793 [2024-07-24 19:07:01.326402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.793 [2024-07-24 19:07:01.326415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.793 [2024-07-24 19:07:01.326445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.793 qpair failed and we were unable to recover it.
00:24:23.793 [2024-07-24 19:07:01.336259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.793 [2024-07-24 19:07:01.336388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.793 [2024-07-24 19:07:01.336414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.793 [2024-07-24 19:07:01.336430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.793 [2024-07-24 19:07:01.336442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.793 [2024-07-24 19:07:01.336472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.793 qpair failed and we were unable to recover it.
00:24:23.793 [2024-07-24 19:07:01.346333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.793 [2024-07-24 19:07:01.346467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.793 [2024-07-24 19:07:01.346495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.793 [2024-07-24 19:07:01.346515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.793 [2024-07-24 19:07:01.346528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.793 [2024-07-24 19:07:01.346559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.793 qpair failed and we were unable to recover it.
00:24:23.793 [2024-07-24 19:07:01.356311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.793 [2024-07-24 19:07:01.356432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.793 [2024-07-24 19:07:01.356459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.793 [2024-07-24 19:07:01.356474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.793 [2024-07-24 19:07:01.356487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.793 [2024-07-24 19:07:01.356518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.793 qpair failed and we were unable to recover it.
00:24:23.793 [2024-07-24 19:07:01.366357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.793 [2024-07-24 19:07:01.366487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.793 [2024-07-24 19:07:01.366513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.793 [2024-07-24 19:07:01.366528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.793 [2024-07-24 19:07:01.366541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.793 [2024-07-24 19:07:01.366574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.793 qpair failed and we were unable to recover it.
00:24:23.793 [2024-07-24 19:07:01.376410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.793 [2024-07-24 19:07:01.376574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.793 [2024-07-24 19:07:01.376600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.793 [2024-07-24 19:07:01.376614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.793 [2024-07-24 19:07:01.376627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.794 [2024-07-24 19:07:01.376657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.794 qpair failed and we were unable to recover it.
00:24:23.794 [2024-07-24 19:07:01.386426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.794 [2024-07-24 19:07:01.386551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.794 [2024-07-24 19:07:01.386577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.794 [2024-07-24 19:07:01.386592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.794 [2024-07-24 19:07:01.386605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:23.794 [2024-07-24 19:07:01.386636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:23.794 qpair failed and we were unable to recover it.
00:24:24.053 [2024-07-24 19:07:01.396442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.053 [2024-07-24 19:07:01.396569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.053 [2024-07-24 19:07:01.396594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.053 [2024-07-24 19:07:01.396609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.053 [2024-07-24 19:07:01.396622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.053 [2024-07-24 19:07:01.396653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.053 qpair failed and we were unable to recover it.
00:24:24.053 [2024-07-24 19:07:01.406581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.053 [2024-07-24 19:07:01.406727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.053 [2024-07-24 19:07:01.406753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.053 [2024-07-24 19:07:01.406773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.053 [2024-07-24 19:07:01.406788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.053 [2024-07-24 19:07:01.406818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.053 qpair failed and we were unable to recover it.
00:24:24.053 [2024-07-24 19:07:01.416488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.053 [2024-07-24 19:07:01.416612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.053 [2024-07-24 19:07:01.416637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.053 [2024-07-24 19:07:01.416652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.053 [2024-07-24 19:07:01.416665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.053 [2024-07-24 19:07:01.416695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.053 qpair failed and we were unable to recover it.
00:24:24.053 [2024-07-24 19:07:01.426556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.053 [2024-07-24 19:07:01.426676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.053 [2024-07-24 19:07:01.426702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.053 [2024-07-24 19:07:01.426716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.053 [2024-07-24 19:07:01.426730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.053 [2024-07-24 19:07:01.426759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.053 qpair failed and we were unable to recover it.
00:24:24.053 [2024-07-24 19:07:01.436680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.053 [2024-07-24 19:07:01.436808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.053 [2024-07-24 19:07:01.436834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.053 [2024-07-24 19:07:01.436849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.053 [2024-07-24 19:07:01.436862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.053 [2024-07-24 19:07:01.436892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.053 qpair failed and we were unable to recover it.
00:24:24.053 [2024-07-24 19:07:01.446625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.053 [2024-07-24 19:07:01.446758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.053 [2024-07-24 19:07:01.446783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.053 [2024-07-24 19:07:01.446798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.053 [2024-07-24 19:07:01.446811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.053 [2024-07-24 19:07:01.446841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.053 qpair failed and we were unable to recover it.
00:24:24.053 [2024-07-24 19:07:01.456623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.053 [2024-07-24 19:07:01.456749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.053 [2024-07-24 19:07:01.456775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.053 [2024-07-24 19:07:01.456789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.053 [2024-07-24 19:07:01.456801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.053 [2024-07-24 19:07:01.456831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.053 qpair failed and we were unable to recover it.
00:24:24.053 [2024-07-24 19:07:01.466640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.053 [2024-07-24 19:07:01.466762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.053 [2024-07-24 19:07:01.466787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.053 [2024-07-24 19:07:01.466802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.053 [2024-07-24 19:07:01.466815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.053 [2024-07-24 19:07:01.466858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.053 qpair failed and we were unable to recover it.
00:24:24.053 [2024-07-24 19:07:01.476690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.053 [2024-07-24 19:07:01.476820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.053 [2024-07-24 19:07:01.476846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.053 [2024-07-24 19:07:01.476861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.053 [2024-07-24 19:07:01.476873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.053 [2024-07-24 19:07:01.476904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.053 qpair failed and we were unable to recover it.
00:24:24.053 [2024-07-24 19:07:01.486753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.053 [2024-07-24 19:07:01.486899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.053 [2024-07-24 19:07:01.486928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.053 [2024-07-24 19:07:01.486946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.053 [2024-07-24 19:07:01.486960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.053 [2024-07-24 19:07:01.487002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.053 qpair failed and we were unable to recover it.
00:24:24.053 [2024-07-24 19:07:01.496723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.053 [2024-07-24 19:07:01.496857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.053 [2024-07-24 19:07:01.496888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.053 [2024-07-24 19:07:01.496905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.053 [2024-07-24 19:07:01.496918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.053 [2024-07-24 19:07:01.496948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.053 qpair failed and we were unable to recover it.
00:24:24.053 [2024-07-24 19:07:01.506753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.053 [2024-07-24 19:07:01.506924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.053 [2024-07-24 19:07:01.506950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.053 [2024-07-24 19:07:01.506965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.053 [2024-07-24 19:07:01.506978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.053 [2024-07-24 19:07:01.507008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.053 qpair failed and we were unable to recover it.
00:24:24.053 [2024-07-24 19:07:01.516816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.054 [2024-07-24 19:07:01.516971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.054 [2024-07-24 19:07:01.516999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.054 [2024-07-24 19:07:01.517017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.054 [2024-07-24 19:07:01.517031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.054 [2024-07-24 19:07:01.517061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.054 qpair failed and we were unable to recover it.
00:24:24.054 [2024-07-24 19:07:01.526849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.054 [2024-07-24 19:07:01.527023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.054 [2024-07-24 19:07:01.527049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.054 [2024-07-24 19:07:01.527064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.054 [2024-07-24 19:07:01.527077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.054 [2024-07-24 19:07:01.527117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.054 qpair failed and we were unable to recover it.
00:24:24.054 [2024-07-24 19:07:01.536840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.054 [2024-07-24 19:07:01.536987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.054 [2024-07-24 19:07:01.537014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.054 [2024-07-24 19:07:01.537029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.054 [2024-07-24 19:07:01.537042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.054 [2024-07-24 19:07:01.537077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.054 qpair failed and we were unable to recover it.
00:24:24.054 [2024-07-24 19:07:01.546896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.054 [2024-07-24 19:07:01.547022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.054 [2024-07-24 19:07:01.547048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.054 [2024-07-24 19:07:01.547063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.054 [2024-07-24 19:07:01.547076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.054 [2024-07-24 19:07:01.547113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.054 qpair failed and we were unable to recover it.
00:24:24.054 [2024-07-24 19:07:01.556888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.054 [2024-07-24 19:07:01.557011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.054 [2024-07-24 19:07:01.557037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.054 [2024-07-24 19:07:01.557051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.054 [2024-07-24 19:07:01.557064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.054 [2024-07-24 19:07:01.557095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.054 qpair failed and we were unable to recover it.
00:24:24.054 [2024-07-24 19:07:01.566927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.054 [2024-07-24 19:07:01.567056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.054 [2024-07-24 19:07:01.567082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.054 [2024-07-24 19:07:01.567097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.054 [2024-07-24 19:07:01.567119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.054 [2024-07-24 19:07:01.567151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.054 qpair failed and we were unable to recover it.
00:24:24.054 [2024-07-24 19:07:01.576951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.054 [2024-07-24 19:07:01.577074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.054 [2024-07-24 19:07:01.577109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.054 [2024-07-24 19:07:01.577127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.054 [2024-07-24 19:07:01.577141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.054 [2024-07-24 19:07:01.577172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.054 qpair failed and we were unable to recover it.
00:24:24.054 [2024-07-24 19:07:01.586977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.054 [2024-07-24 19:07:01.587107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.054 [2024-07-24 19:07:01.587139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.054 [2024-07-24 19:07:01.587155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.054 [2024-07-24 19:07:01.587168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.054 [2024-07-24 19:07:01.587198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.054 qpair failed and we were unable to recover it.
00:24:24.054 [2024-07-24 19:07:01.597003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.054 [2024-07-24 19:07:01.597135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.054 [2024-07-24 19:07:01.597161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.054 [2024-07-24 19:07:01.597176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.054 [2024-07-24 19:07:01.597189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.054 [2024-07-24 19:07:01.597218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.054 qpair failed and we were unable to recover it.
00:24:24.054 [2024-07-24 19:07:01.607044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.054 [2024-07-24 19:07:01.607170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.054 [2024-07-24 19:07:01.607196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.054 [2024-07-24 19:07:01.607211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.054 [2024-07-24 19:07:01.607224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.054 [2024-07-24 19:07:01.607255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.054 qpair failed and we were unable to recover it.
00:24:24.054 [2024-07-24 19:07:01.617047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.054 [2024-07-24 19:07:01.617179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.054 [2024-07-24 19:07:01.617205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.054 [2024-07-24 19:07:01.617220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.054 [2024-07-24 19:07:01.617233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.054 [2024-07-24 19:07:01.617263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.054 qpair failed and we were unable to recover it.
00:24:24.054 [2024-07-24 19:07:01.627093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.054 [2024-07-24 19:07:01.627231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.054 [2024-07-24 19:07:01.627257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.054 [2024-07-24 19:07:01.627272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.054 [2024-07-24 19:07:01.627288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.054 [2024-07-24 19:07:01.627324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.054 qpair failed and we were unable to recover it.
00:24:24.054 [2024-07-24 19:07:01.637123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.054 [2024-07-24 19:07:01.637254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.054 [2024-07-24 19:07:01.637280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.054 [2024-07-24 19:07:01.637295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.054 [2024-07-24 19:07:01.637308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.055 [2024-07-24 19:07:01.637338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.055 qpair failed and we were unable to recover it.
00:24:24.055 [2024-07-24 19:07:01.647179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.055 [2024-07-24 19:07:01.647305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.055 [2024-07-24 19:07:01.647331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.055 [2024-07-24 19:07:01.647346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.055 [2024-07-24 19:07:01.647359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.055 [2024-07-24 19:07:01.647388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.055 qpair failed and we were unable to recover it.
00:24:24.313 [2024-07-24 19:07:01.657183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.313 [2024-07-24 19:07:01.657314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.313 [2024-07-24 19:07:01.657340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.313 [2024-07-24 19:07:01.657355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.313 [2024-07-24 19:07:01.657368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.313 [2024-07-24 19:07:01.657397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.313 qpair failed and we were unable to recover it.
00:24:24.313 [2024-07-24 19:07:01.667242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.313 [2024-07-24 19:07:01.667397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.313 [2024-07-24 19:07:01.667422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.313 [2024-07-24 19:07:01.667437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.313 [2024-07-24 19:07:01.667450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.313 [2024-07-24 19:07:01.667479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.313 qpair failed and we were unable to recover it.
00:24:24.313 [2024-07-24 19:07:01.677237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.313 [2024-07-24 19:07:01.677376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.313 [2024-07-24 19:07:01.677402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.313 [2024-07-24 19:07:01.677417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.313 [2024-07-24 19:07:01.677430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.313 [2024-07-24 19:07:01.677460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.313 qpair failed and we were unable to recover it.
00:24:24.313 [2024-07-24 19:07:01.687265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.313 [2024-07-24 19:07:01.687397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.313 [2024-07-24 19:07:01.687424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.313 [2024-07-24 19:07:01.687439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.313 [2024-07-24 19:07:01.687452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.313 [2024-07-24 19:07:01.687481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.313 qpair failed and we were unable to recover it.
00:24:24.313 [2024-07-24 19:07:01.697297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.313 [2024-07-24 19:07:01.697435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.313 [2024-07-24 19:07:01.697462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.313 [2024-07-24 19:07:01.697477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.313 [2024-07-24 19:07:01.697490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.313 [2024-07-24 19:07:01.697522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.313 qpair failed and we were unable to recover it.
00:24:24.313 [2024-07-24 19:07:01.707360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.313 [2024-07-24 19:07:01.707507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.314 [2024-07-24 19:07:01.707532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.314 [2024-07-24 19:07:01.707547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.314 [2024-07-24 19:07:01.707560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.314 [2024-07-24 19:07:01.707590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.314 qpair failed and we were unable to recover it.
00:24:24.314 [2024-07-24 19:07:01.717335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.314 [2024-07-24 19:07:01.717484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.314 [2024-07-24 19:07:01.717510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.314 [2024-07-24 19:07:01.717524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.314 [2024-07-24 19:07:01.717545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.314 [2024-07-24 19:07:01.717576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.314 qpair failed and we were unable to recover it.
00:24:24.314 [2024-07-24 19:07:01.727368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.314 [2024-07-24 19:07:01.727494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.314 [2024-07-24 19:07:01.727520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.314 [2024-07-24 19:07:01.727534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.314 [2024-07-24 19:07:01.727547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.314 [2024-07-24 19:07:01.727577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.314 qpair failed and we were unable to recover it.
00:24:24.314 [2024-07-24 19:07:01.737395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.314 [2024-07-24 19:07:01.737524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.314 [2024-07-24 19:07:01.737549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.314 [2024-07-24 19:07:01.737564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.314 [2024-07-24 19:07:01.737577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.314 [2024-07-24 19:07:01.737606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.314 qpair failed and we were unable to recover it.
00:24:24.314 [2024-07-24 19:07:01.747442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.314 [2024-07-24 19:07:01.747574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.314 [2024-07-24 19:07:01.747600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.314 [2024-07-24 19:07:01.747614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.314 [2024-07-24 19:07:01.747627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.314 [2024-07-24 19:07:01.747657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.314 qpair failed and we were unable to recover it.
00:24:24.314 [2024-07-24 19:07:01.757485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.314 [2024-07-24 19:07:01.757623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.314 [2024-07-24 19:07:01.757649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.314 [2024-07-24 19:07:01.757664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.314 [2024-07-24 19:07:01.757677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.314 [2024-07-24 19:07:01.757719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.314 qpair failed and we were unable to recover it.
00:24:24.314 [2024-07-24 19:07:01.767487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.314 [2024-07-24 19:07:01.767622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.314 [2024-07-24 19:07:01.767649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.314 [2024-07-24 19:07:01.767663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.314 [2024-07-24 19:07:01.767676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.314 [2024-07-24 19:07:01.767706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.314 qpair failed and we were unable to recover it.
00:24:24.314 [2024-07-24 19:07:01.777597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.314 [2024-07-24 19:07:01.777724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.314 [2024-07-24 19:07:01.777750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.314 [2024-07-24 19:07:01.777765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.314 [2024-07-24 19:07:01.777778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.314 [2024-07-24 19:07:01.777808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.314 qpair failed and we were unable to recover it.
00:24:24.314 [2024-07-24 19:07:01.787586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.314 [2024-07-24 19:07:01.787709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.314 [2024-07-24 19:07:01.787735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.314 [2024-07-24 19:07:01.787749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.314 [2024-07-24 19:07:01.787763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.314 [2024-07-24 19:07:01.787792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.314 qpair failed and we were unable to recover it.
00:24:24.314 [2024-07-24 19:07:01.797592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.314 [2024-07-24 19:07:01.797712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.314 [2024-07-24 19:07:01.797738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.314 [2024-07-24 19:07:01.797752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.314 [2024-07-24 19:07:01.797766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.314 [2024-07-24 19:07:01.797796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.314 qpair failed and we were unable to recover it.
00:24:24.314 [2024-07-24 19:07:01.807634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.314 [2024-07-24 19:07:01.807763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.314 [2024-07-24 19:07:01.807788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.314 [2024-07-24 19:07:01.807809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.314 [2024-07-24 19:07:01.807824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.314 [2024-07-24 19:07:01.807855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.314 qpair failed and we were unable to recover it.
00:24:24.314 [2024-07-24 19:07:01.817631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.314 [2024-07-24 19:07:01.817759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.314 [2024-07-24 19:07:01.817784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.314 [2024-07-24 19:07:01.817799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.314 [2024-07-24 19:07:01.817812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.314 [2024-07-24 19:07:01.817842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.314 qpair failed and we were unable to recover it.
00:24:24.314 [2024-07-24 19:07:01.827663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.314 [2024-07-24 19:07:01.827828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.314 [2024-07-24 19:07:01.827853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.314 [2024-07-24 19:07:01.827868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.314 [2024-07-24 19:07:01.827881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.314 [2024-07-24 19:07:01.827912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.314 qpair failed and we were unable to recover it.
00:24:24.314 [2024-07-24 19:07:01.837716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:24.315 [2024-07-24 19:07:01.837849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:24.315 [2024-07-24 19:07:01.837876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:24.315 [2024-07-24 19:07:01.837890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:24.315 [2024-07-24 19:07:01.837904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90
00:24:24.315 [2024-07-24 19:07:01.837935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:24.315 qpair failed and we were unable to recover it.
00:24:24.315 [2024-07-24 19:07:01.847759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.315 [2024-07-24 19:07:01.847896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.315 [2024-07-24 19:07:01.847922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.315 [2024-07-24 19:07:01.847937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.315 [2024-07-24 19:07:01.847950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.315 [2024-07-24 19:07:01.847980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.315 qpair failed and we were unable to recover it. 00:24:24.315 [2024-07-24 19:07:01.857777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.315 [2024-07-24 19:07:01.857908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.315 [2024-07-24 19:07:01.857936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.315 [2024-07-24 19:07:01.857956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.315 [2024-07-24 19:07:01.857970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.315 [2024-07-24 19:07:01.858001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.315 qpair failed and we were unable to recover it. 00:24:24.315 [2024-07-24 19:07:01.867833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.315 [2024-07-24 19:07:01.867956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.315 [2024-07-24 19:07:01.867983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.315 [2024-07-24 19:07:01.867997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.315 [2024-07-24 19:07:01.868010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.315 [2024-07-24 19:07:01.868040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.315 qpair failed and we were unable to recover it. 
00:24:24.315 [2024-07-24 19:07:01.877855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.315 [2024-07-24 19:07:01.877974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.315 [2024-07-24 19:07:01.878000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.315 [2024-07-24 19:07:01.878015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.315 [2024-07-24 19:07:01.878029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.315 [2024-07-24 19:07:01.878058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.315 qpair failed and we were unable to recover it. 00:24:24.315 [2024-07-24 19:07:01.887882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.315 [2024-07-24 19:07:01.888043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.315 [2024-07-24 19:07:01.888069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.315 [2024-07-24 19:07:01.888084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.315 [2024-07-24 19:07:01.888097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.315 [2024-07-24 19:07:01.888152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.315 qpair failed and we were unable to recover it. 00:24:24.315 [2024-07-24 19:07:01.897938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.315 [2024-07-24 19:07:01.898067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.315 [2024-07-24 19:07:01.898099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.315 [2024-07-24 19:07:01.898125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.315 [2024-07-24 19:07:01.898139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.315 [2024-07-24 19:07:01.898171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.315 qpair failed and we were unable to recover it. 
00:24:24.315 [2024-07-24 19:07:01.907913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.315 [2024-07-24 19:07:01.908050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.315 [2024-07-24 19:07:01.908077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.315 [2024-07-24 19:07:01.908092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.315 [2024-07-24 19:07:01.908114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.315 [2024-07-24 19:07:01.908147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.315 qpair failed and we were unable to recover it. 00:24:24.575 [2024-07-24 19:07:01.917977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.575 [2024-07-24 19:07:01.918131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.575 [2024-07-24 19:07:01.918160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.575 [2024-07-24 19:07:01.918175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.575 [2024-07-24 19:07:01.918189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.575 [2024-07-24 19:07:01.918220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.575 qpair failed and we were unable to recover it. 00:24:24.575 [2024-07-24 19:07:01.927978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.575 [2024-07-24 19:07:01.928121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.575 [2024-07-24 19:07:01.928148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.575 [2024-07-24 19:07:01.928163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.575 [2024-07-24 19:07:01.928176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.575 [2024-07-24 19:07:01.928207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.575 qpair failed and we were unable to recover it. 
00:24:24.575 [2024-07-24 19:07:01.938048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.575 [2024-07-24 19:07:01.938196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.575 [2024-07-24 19:07:01.938223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.575 [2024-07-24 19:07:01.938238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.575 [2024-07-24 19:07:01.938251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.575 [2024-07-24 19:07:01.938287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.575 qpair failed and we were unable to recover it. 00:24:24.575 [2024-07-24 19:07:01.948047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.575 [2024-07-24 19:07:01.948185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.575 [2024-07-24 19:07:01.948212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.575 [2024-07-24 19:07:01.948227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.575 [2024-07-24 19:07:01.948239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.575 [2024-07-24 19:07:01.948269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.575 qpair failed and we were unable to recover it. 00:24:24.575 [2024-07-24 19:07:01.958050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.575 [2024-07-24 19:07:01.958172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.575 [2024-07-24 19:07:01.958198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.575 [2024-07-24 19:07:01.958213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.575 [2024-07-24 19:07:01.958226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.575 [2024-07-24 19:07:01.958256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.575 qpair failed and we were unable to recover it. 
00:24:24.575 [2024-07-24 19:07:01.968124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.575 [2024-07-24 19:07:01.968258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.575 [2024-07-24 19:07:01.968284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.575 [2024-07-24 19:07:01.968299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.575 [2024-07-24 19:07:01.968312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.575 [2024-07-24 19:07:01.968353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.575 qpair failed and we were unable to recover it. 00:24:24.575 [2024-07-24 19:07:01.978148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.575 [2024-07-24 19:07:01.978275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.575 [2024-07-24 19:07:01.978303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.575 [2024-07-24 19:07:01.978323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.575 [2024-07-24 19:07:01.978337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.575 [2024-07-24 19:07:01.978370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.575 qpair failed and we were unable to recover it. 00:24:24.575 [2024-07-24 19:07:01.988156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.575 [2024-07-24 19:07:01.988283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.575 [2024-07-24 19:07:01.988315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.575 [2024-07-24 19:07:01.988330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.575 [2024-07-24 19:07:01.988344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.575 [2024-07-24 19:07:01.988374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.575 qpair failed and we were unable to recover it. 
00:24:24.575 [2024-07-24 19:07:01.998251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.575 [2024-07-24 19:07:01.998421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.575 [2024-07-24 19:07:01.998447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.575 [2024-07-24 19:07:01.998463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.575 [2024-07-24 19:07:01.998476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.575 [2024-07-24 19:07:01.998507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.575 qpair failed and we were unable to recover it. 00:24:24.575 [2024-07-24 19:07:02.008269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.575 [2024-07-24 19:07:02.008403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.575 [2024-07-24 19:07:02.008430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.575 [2024-07-24 19:07:02.008444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.575 [2024-07-24 19:07:02.008457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.575 [2024-07-24 19:07:02.008489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.575 qpair failed and we were unable to recover it. 00:24:24.575 [2024-07-24 19:07:02.018227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.576 [2024-07-24 19:07:02.018365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.576 [2024-07-24 19:07:02.018392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.576 [2024-07-24 19:07:02.018407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.576 [2024-07-24 19:07:02.018420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.576 [2024-07-24 19:07:02.018449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.576 qpair failed and we were unable to recover it. 
00:24:24.576 [2024-07-24 19:07:02.028295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.576 [2024-07-24 19:07:02.028464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.576 [2024-07-24 19:07:02.028490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.576 [2024-07-24 19:07:02.028504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.576 [2024-07-24 19:07:02.028517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.576 [2024-07-24 19:07:02.028553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.576 qpair failed and we were unable to recover it. 00:24:24.576 [2024-07-24 19:07:02.038287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.576 [2024-07-24 19:07:02.038424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.576 [2024-07-24 19:07:02.038450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.576 [2024-07-24 19:07:02.038465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.576 [2024-07-24 19:07:02.038478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.576 [2024-07-24 19:07:02.038507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.576 qpair failed and we were unable to recover it. 00:24:24.576 [2024-07-24 19:07:02.048360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.576 [2024-07-24 19:07:02.048486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.576 [2024-07-24 19:07:02.048512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.576 [2024-07-24 19:07:02.048526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.576 [2024-07-24 19:07:02.048539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.576 [2024-07-24 19:07:02.048569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.576 qpair failed and we were unable to recover it. 
00:24:24.576 [2024-07-24 19:07:02.058341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.576 [2024-07-24 19:07:02.058480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.576 [2024-07-24 19:07:02.058506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.576 [2024-07-24 19:07:02.058521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.576 [2024-07-24 19:07:02.058534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.576 [2024-07-24 19:07:02.058563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.576 qpair failed and we were unable to recover it. 00:24:24.576 [2024-07-24 19:07:02.068417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.576 [2024-07-24 19:07:02.068539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.576 [2024-07-24 19:07:02.068565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.576 [2024-07-24 19:07:02.068580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.576 [2024-07-24 19:07:02.068593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.576 [2024-07-24 19:07:02.068622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.576 qpair failed and we were unable to recover it. 00:24:24.576 [2024-07-24 19:07:02.078418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.576 [2024-07-24 19:07:02.078533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.576 [2024-07-24 19:07:02.078564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.576 [2024-07-24 19:07:02.078580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.576 [2024-07-24 19:07:02.078593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.576 [2024-07-24 19:07:02.078622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.576 qpair failed and we were unable to recover it. 
00:24:24.576 [2024-07-24 19:07:02.088433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.576 [2024-07-24 19:07:02.088565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.576 [2024-07-24 19:07:02.088591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.576 [2024-07-24 19:07:02.088605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.576 [2024-07-24 19:07:02.088618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.576 [2024-07-24 19:07:02.088647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.576 qpair failed and we were unable to recover it. 00:24:24.576 [2024-07-24 19:07:02.098448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.576 [2024-07-24 19:07:02.098576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.576 [2024-07-24 19:07:02.098603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.576 [2024-07-24 19:07:02.098618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.576 [2024-07-24 19:07:02.098631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.576 [2024-07-24 19:07:02.098660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.576 qpair failed and we were unable to recover it. 00:24:24.576 [2024-07-24 19:07:02.108509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.576 [2024-07-24 19:07:02.108636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.576 [2024-07-24 19:07:02.108661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.576 [2024-07-24 19:07:02.108676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.576 [2024-07-24 19:07:02.108689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.576 [2024-07-24 19:07:02.108718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.576 qpair failed and we were unable to recover it. 
00:24:24.576 [2024-07-24 19:07:02.118541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.576 [2024-07-24 19:07:02.118663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.576 [2024-07-24 19:07:02.118689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.576 [2024-07-24 19:07:02.118704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.576 [2024-07-24 19:07:02.118723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.576 [2024-07-24 19:07:02.118755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.576 qpair failed and we were unable to recover it. 00:24:24.576 [2024-07-24 19:07:02.128553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.576 [2024-07-24 19:07:02.128683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.576 [2024-07-24 19:07:02.128708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.576 [2024-07-24 19:07:02.128722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.576 [2024-07-24 19:07:02.128734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.577 [2024-07-24 19:07:02.128763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.577 qpair failed and we were unable to recover it. 00:24:24.577 [2024-07-24 19:07:02.138612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.577 [2024-07-24 19:07:02.138757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.577 [2024-07-24 19:07:02.138783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.577 [2024-07-24 19:07:02.138798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.577 [2024-07-24 19:07:02.138811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.577 [2024-07-24 19:07:02.138840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.577 qpair failed and we were unable to recover it. 
00:24:24.577 [2024-07-24 19:07:02.148592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.577 [2024-07-24 19:07:02.148715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.577 [2024-07-24 19:07:02.148740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.577 [2024-07-24 19:07:02.148755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.577 [2024-07-24 19:07:02.148768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.577 [2024-07-24 19:07:02.148798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.577 qpair failed and we were unable to recover it. 00:24:24.577 [2024-07-24 19:07:02.158657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.577 [2024-07-24 19:07:02.158822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.577 [2024-07-24 19:07:02.158848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.577 [2024-07-24 19:07:02.158862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.577 [2024-07-24 19:07:02.158875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.577 [2024-07-24 19:07:02.158918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.577 qpair failed and we were unable to recover it. 00:24:24.577 [2024-07-24 19:07:02.168657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.577 [2024-07-24 19:07:02.168793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.577 [2024-07-24 19:07:02.168820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.577 [2024-07-24 19:07:02.168834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.577 [2024-07-24 19:07:02.168847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.577 [2024-07-24 19:07:02.168877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.577 qpair failed and we were unable to recover it. 
00:24:24.836 [2024-07-24 19:07:02.178714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.836 [2024-07-24 19:07:02.178842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.836 [2024-07-24 19:07:02.178869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.836 [2024-07-24 19:07:02.178884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.836 [2024-07-24 19:07:02.178897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.836 [2024-07-24 19:07:02.178927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.836 qpair failed and we were unable to recover it. 00:24:24.836 [2024-07-24 19:07:02.188719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.836 [2024-07-24 19:07:02.188839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.836 [2024-07-24 19:07:02.188865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.836 [2024-07-24 19:07:02.188880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.836 [2024-07-24 19:07:02.188893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.836 [2024-07-24 19:07:02.188924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.836 qpair failed and we were unable to recover it. 00:24:24.836 [2024-07-24 19:07:02.198765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.836 [2024-07-24 19:07:02.198958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.836 [2024-07-24 19:07:02.198984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.836 [2024-07-24 19:07:02.198998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.836 [2024-07-24 19:07:02.199011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.836 [2024-07-24 19:07:02.199041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.836 qpair failed and we were unable to recover it. 
00:24:24.836 [2024-07-24 19:07:02.208773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.836 [2024-07-24 19:07:02.208899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.836 [2024-07-24 19:07:02.208925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.836 [2024-07-24 19:07:02.208946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.836 [2024-07-24 19:07:02.208959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.836 [2024-07-24 19:07:02.208990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.836 qpair failed and we were unable to recover it. 00:24:24.836 [2024-07-24 19:07:02.218814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.836 [2024-07-24 19:07:02.218958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.836 [2024-07-24 19:07:02.218984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.836 [2024-07-24 19:07:02.218998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.836 [2024-07-24 19:07:02.219012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.836 [2024-07-24 19:07:02.219041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.836 qpair failed and we were unable to recover it. 00:24:24.836 [2024-07-24 19:07:02.228867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.836 [2024-07-24 19:07:02.229035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.836 [2024-07-24 19:07:02.229061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.836 [2024-07-24 19:07:02.229076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.836 [2024-07-24 19:07:02.229089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.836 [2024-07-24 19:07:02.229125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.836 qpair failed and we were unable to recover it. 
00:24:24.836 [2024-07-24 19:07:02.238918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.836 [2024-07-24 19:07:02.239079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.836 [2024-07-24 19:07:02.239114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.836 [2024-07-24 19:07:02.239133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.836 [2024-07-24 19:07:02.239146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.836 [2024-07-24 19:07:02.239175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.836 qpair failed and we were unable to recover it. 00:24:24.836 [2024-07-24 19:07:02.248912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.836 [2024-07-24 19:07:02.249040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.836 [2024-07-24 19:07:02.249066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.836 [2024-07-24 19:07:02.249081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.836 [2024-07-24 19:07:02.249094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.836 [2024-07-24 19:07:02.249133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.836 qpair failed and we were unable to recover it. 00:24:24.836 [2024-07-24 19:07:02.258904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.836 [2024-07-24 19:07:02.259027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.836 [2024-07-24 19:07:02.259053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.836 [2024-07-24 19:07:02.259068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.836 [2024-07-24 19:07:02.259081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.836 [2024-07-24 19:07:02.259116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.836 qpair failed and we were unable to recover it. 
00:24:24.836 [2024-07-24 19:07:02.268963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.836 [2024-07-24 19:07:02.269098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.836 [2024-07-24 19:07:02.269132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.836 [2024-07-24 19:07:02.269147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.836 [2024-07-24 19:07:02.269160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.836 [2024-07-24 19:07:02.269190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.836 qpair failed and we were unable to recover it. 00:24:24.836 [2024-07-24 19:07:02.279043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.836 [2024-07-24 19:07:02.279185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.836 [2024-07-24 19:07:02.279215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.836 [2024-07-24 19:07:02.279230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.836 [2024-07-24 19:07:02.279243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.836 [2024-07-24 19:07:02.279273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.836 qpair failed and we were unable to recover it. 00:24:24.836 [2024-07-24 19:07:02.289047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.836 [2024-07-24 19:07:02.289204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.836 [2024-07-24 19:07:02.289230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.836 [2024-07-24 19:07:02.289244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.836 [2024-07-24 19:07:02.289258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.836 [2024-07-24 19:07:02.289288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.836 qpair failed and we were unable to recover it. 
00:24:24.836 [2024-07-24 19:07:02.299054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.836 [2024-07-24 19:07:02.299195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.836 [2024-07-24 19:07:02.299221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.836 [2024-07-24 19:07:02.299242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.836 [2024-07-24 19:07:02.299256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.836 [2024-07-24 19:07:02.299286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.836 qpair failed and we were unable to recover it. 00:24:24.836 [2024-07-24 19:07:02.309072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.836 [2024-07-24 19:07:02.309210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.836 [2024-07-24 19:07:02.309236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.836 [2024-07-24 19:07:02.309251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.836 [2024-07-24 19:07:02.309264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.836 [2024-07-24 19:07:02.309308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.836 qpair failed and we were unable to recover it. 00:24:24.836 [2024-07-24 19:07:02.319114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.836 [2024-07-24 19:07:02.319239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.836 [2024-07-24 19:07:02.319264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.836 [2024-07-24 19:07:02.319279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.836 [2024-07-24 19:07:02.319292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.837 [2024-07-24 19:07:02.319322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.837 qpair failed and we were unable to recover it. 
00:24:24.837 [2024-07-24 19:07:02.329198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.837 [2024-07-24 19:07:02.329402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.837 [2024-07-24 19:07:02.329431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.837 [2024-07-24 19:07:02.329446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.837 [2024-07-24 19:07:02.329458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.837 [2024-07-24 19:07:02.329489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.837 qpair failed and we were unable to recover it. 00:24:24.837 [2024-07-24 19:07:02.339221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.837 [2024-07-24 19:07:02.339381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.837 [2024-07-24 19:07:02.339407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.837 [2024-07-24 19:07:02.339422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.837 [2024-07-24 19:07:02.339435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.837 [2024-07-24 19:07:02.339466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.837 qpair failed and we were unable to recover it. 00:24:24.837 [2024-07-24 19:07:02.349253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.837 [2024-07-24 19:07:02.349448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.837 [2024-07-24 19:07:02.349474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.837 [2024-07-24 19:07:02.349489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.837 [2024-07-24 19:07:02.349502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.837 [2024-07-24 19:07:02.349533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.837 qpair failed and we were unable to recover it. 
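For context on the pattern above: each block is one attempt by the SPDK host to establish an I/O queue pair over NVMe/TCP while the target side is tearing down the controller, so the target logs "Unknown controller ID" and fails the Fabrics CONNECT. As a rough, hedged way to exercise the same path by hand from a Linux initiator (the test itself drives SPDK's userspace host library, not the kernel driver; nvme-cli availability is an assumption, and the portal and subsystem NQN are copied from the log):

    #!/usr/bin/env bash
    # Sketch only: retry an NVMe/TCP connect much like the host library retries
    # its CONNECT in the log above. Values are taken from the log.
    TRADDR=10.0.0.2
    TRSVCID=4420
    SUBNQN=nqn.2016-06.io.spdk:cnode1
    for attempt in $(seq 1 20); do
        if nvme connect -t tcp -a "$TRADDR" -s "$TRSVCID" -n "$SUBNQN"; then
            echo "attempt $attempt: connected"
            break
        fi
        echo "attempt $attempt: connect failed, retrying"
        sleep 0.1
    done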
00:24:24.837 [2024-07-24 19:07:02.359249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.837 [2024-07-24 19:07:02.359365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.837 [2024-07-24 19:07:02.359391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.837 [2024-07-24 19:07:02.359406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.837 [2024-07-24 19:07:02.359419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.837 [2024-07-24 19:07:02.359461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.837 qpair failed and we were unable to recover it. 00:24:24.837 [2024-07-24 19:07:02.369294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.837 [2024-07-24 19:07:02.369469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.837 [2024-07-24 19:07:02.369495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.837 [2024-07-24 19:07:02.369510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.837 [2024-07-24 19:07:02.369523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0718000b90 00:24:24.837 [2024-07-24 19:07:02.369553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:24.837 qpair failed and we were unable to recover it. 00:24:24.837 [2024-07-24 19:07:02.369597] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:24:24.837 A controller has encountered a failure and is being reset. 00:24:25.094 Controller properly reset. 00:24:28.371 Initializing NVMe Controllers 00:24:28.371 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:28.371 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:28.371 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:24:28.371 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:24:28.371 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:24:28.371 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:24:28.371 Initialization complete. Launching workers. 
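At this point the failed Keep Alive drives a full controller reset, after which the host reattaches and re-associates the qpairs with all four lcores. Assuming the SPDK target's discovery service listens on the same portal (the default for SPDK's nvmf target) and nvme-cli is available, reachability after the reset could be double-checked from a shell with something like:

    # Sketch only: confirm the subsystem is advertised again after the reset.
    nvme discover -t tcp -a 10.0.0.2 -s 4420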
00:24:28.371 Starting thread on core 1 00:24:28.371 Starting thread on core 2 00:24:28.371 Starting thread on core 3 00:24:28.371 Starting thread on core 0 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:24:28.371 00:24:28.371 real 0m11.380s 00:24:28.371 user 0m27.740s 00:24:28.371 sys 0m6.448s 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:28.371 ************************************ 00:24:28.371 END TEST nvmf_target_disconnect_tc2 00:24:28.371 ************************************ 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:28.371 rmmod nvme_tcp 00:24:28.371 rmmod nvme_fabrics 00:24:28.371 rmmod nvme_keyring 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3252966 ']' 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3252966 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3252966 ']' 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 3252966 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3252966 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3252966' 00:24:28.371 killing process with pid 3252966 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@969 -- # kill 3252966 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 3252966 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.371 19:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.273 19:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:30.273 00:24:30.273 real 0m16.233s 00:24:30.273 user 0m53.495s 00:24:30.273 sys 0m8.685s 00:24:30.273 19:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:30.273 19:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:30.273 ************************************ 00:24:30.273 END TEST nvmf_target_disconnect 00:24:30.273 ************************************ 00:24:30.273 19:07:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:24:30.273 00:24:30.273 real 5m4.885s 00:24:30.273 user 10m52.107s 00:24:30.273 sys 1m13.739s 00:24:30.273 19:07:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:30.273 19:07:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.273 ************************************ 00:24:30.273 END TEST nvmf_host 00:24:30.273 ************************************ 00:24:30.273 00:24:30.273 real 19m36.297s 00:24:30.273 user 46m8.654s 00:24:30.273 sys 5m1.047s 00:24:30.273 19:07:07 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:30.273 19:07:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:30.273 ************************************ 00:24:30.274 END TEST nvmf_tcp 00:24:30.274 ************************************ 00:24:30.274 19:07:07 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:24:30.274 19:07:07 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:30.274 19:07:07 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:30.274 19:07:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:30.274 19:07:07 -- common/autotest_common.sh@10 -- # set +x 00:24:30.532 ************************************ 00:24:30.532 START TEST spdkcli_nvmf_tcp 00:24:30.532 ************************************ 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:30.532 * Looking for test storage... 
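Before the spdkcli run starts, the target_disconnect teardown above unloads the host modules and kills the target. A condensed sketch of that nvmftestfini sequence, assuming the pid (3252966) and interface name (cvl_0_1) recorded in this run:

# Unload the host-side NVMe/TCP modules (mirrors the rmmod output above;
# nvme_fabrics and nvme_keyring are removed as dependencies).
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# Kill the nvmf target if it is still running, then flush the test interface.
kill -0 3252966 2>/dev/null && kill 3252966
ip -4 addr flush cvl_0_1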
00:24:30.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3254166 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3254166 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 3254166 ']' 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:30.532 19:07:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:30.532 [2024-07-24 19:07:07.993771] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:24:30.532 [2024-07-24 19:07:07.993859] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3254166 ] 00:24:30.532 EAL: No free 2048 kB hugepages reported on node 1 00:24:30.532 [2024-07-24 19:07:08.049914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:30.791 [2024-07-24 19:07:08.156925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:30.791 [2024-07-24 19:07:08.156929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.791 19:07:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:30.791 19:07:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:24:30.791 19:07:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:24:30.791 19:07:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:30.791 19:07:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:30.791 19:07:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:24:30.791 19:07:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:24:30.791 19:07:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:24:30.791 19:07:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:30.791 19:07:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:30.791 19:07:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:30.791 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:30.791 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:24:30.791 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:24:30.791 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:24:30.791 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:24:30.791 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:24:30.791 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:30.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:24:30.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:24:30.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:30.791 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:30.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:24:30.791 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:30.791 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:30.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:24:30.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:30.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:30.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:30.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:30.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:24:30.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:24:30.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:30.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:24:30.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:30.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:24:30.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:24:30.791 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:24:30.791 ' 00:24:33.318 [2024-07-24 19:07:10.835196] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:34.689 [2024-07-24 19:07:12.055418] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:24:37.213 [2024-07-24 19:07:14.314476] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:24:39.109 [2024-07-24 19:07:16.300787] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:24:40.479 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:24:40.479 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:24:40.479 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:24:40.479 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:24:40.479 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:24:40.479 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:24:40.479 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:24:40.479 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:40.479 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:24:40.479 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:24:40.479 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:40.479 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:40.479 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:24:40.479 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:40.479 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:40.479 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:24:40.479 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:40.479 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:24:40.479 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:40.479 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:40.479 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:24:40.479 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:24:40.479 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:24:40.479 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:24:40.479 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:40.479 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:24:40.479 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:24:40.479 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:24:40.479 19:07:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:24:40.479 19:07:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:40.479 19:07:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:40.479 19:07:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:24:40.479 19:07:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:40.479 19:07:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:40.479 19:07:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:24:40.479 19:07:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:24:40.736 19:07:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:24:40.993 19:07:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:24:40.993 19:07:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:24:40.993 19:07:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:40.993 19:07:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:40.993 19:07:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:24:40.993 19:07:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:40.993 19:07:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:40.993 19:07:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:24:40.993 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:24:40.993 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:40.993 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:24:40.993 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:24:40.993 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:24:40.993 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:24:40.993 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:40.993 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:24:40.993 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:24:40.993 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:24:40.993 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:24:40.993 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:24:40.994 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:24:40.994 ' 00:24:46.286 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:24:46.286 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:24:46.286 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:24:46.286 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:24:46.286 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:24:46.286 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:24:46.287 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:24:46.287 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:24:46.287 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:24:46.287 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:24:46.287 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:24:46.287 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:24:46.287 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:24:46.287 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:24:46.287 19:07:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:24:46.287 19:07:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:46.287 19:07:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:46.287 19:07:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3254166 00:24:46.287 19:07:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 3254166 ']' 00:24:46.287 19:07:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 3254166 00:24:46.287 19:07:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:24:46.287 19:07:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:46.287 19:07:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3254166 00:24:46.287 19:07:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:46.287 19:07:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:46.287 19:07:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3254166' 00:24:46.287 killing process with pid 3254166 00:24:46.287 19:07:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 3254166 00:24:46.287 19:07:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 3254166 00:24:46.545 19:07:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:24:46.545 19:07:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:24:46.545 19:07:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3254166 ']' 00:24:46.545 19:07:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3254166 00:24:46.545 19:07:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 3254166 ']' 00:24:46.545 19:07:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 3254166 00:24:46.545 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3254166) - No such process 00:24:46.545 19:07:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 3254166 is not found' 00:24:46.545 Process with pid 3254166 is not found 00:24:46.545 19:07:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:24:46.545 19:07:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:24:46.545 19:07:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:24:46.545 00:24:46.545 real 0m16.043s 00:24:46.545 user 0m33.869s 00:24:46.545 sys 0m0.829s 00:24:46.545 19:07:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:46.545 19:07:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:46.545 ************************************ 00:24:46.545 END TEST spdkcli_nvmf_tcp 00:24:46.545 ************************************ 00:24:46.545 19:07:23 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:24:46.545 19:07:23 -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:46.545 19:07:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:46.545 19:07:23 -- common/autotest_common.sh@10 -- # set +x 00:24:46.545 ************************************ 00:24:46.545 START TEST nvmf_identify_passthru 00:24:46.545 ************************************ 00:24:46.545 19:07:23 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:24:46.545 * Looking for test storage... 00:24:46.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:46.545 19:07:24 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:46.545 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:24:46.545 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:46.545 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:46.545 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:46.545 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:46.545 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:46.545 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:46.545 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:46.545 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:46.545 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:46.545 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:46.545 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:46.545 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:46.545 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:46.545 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:46.545 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:46.545 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:46.545 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:46.545 19:07:24 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:46.545 19:07:24 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:46.545 19:07:24 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:46.546 19:07:24 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.546 19:07:24 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.546 19:07:24 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.546 19:07:24 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:24:46.546 19:07:24 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.546 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:24:46.546 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:46.546 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:46.546 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:46.546 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:46.546 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:46.546 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:46.546 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:46.546 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:46.546 19:07:24 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:46.546 19:07:24 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:46.546 19:07:24 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:46.546 19:07:24 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:46.546 19:07:24 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.546 19:07:24 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.546 19:07:24 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.546 19:07:24 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:24:46.546 19:07:24 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.546 19:07:24 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:24:46.546 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:46.546 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:46.546 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:46.546 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:46.546 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:46.546 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.546 19:07:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:46.546 19:07:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:46.546 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:46.546 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:46.546 19:07:24 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:24:46.546 19:07:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
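nvmftestinit then walks the supported PCI device ids and maps each matching NIC to its kernel net devices. A small sketch of the same sysfs lookup, using the two E810 ports (device id 0x159b at 0000:09:00.0 and 0000:09:00.1) that this run discovers below:

# List the kernel net devices registered under each E810 port, the same
# /sys/bus/pci/devices/<bdf>/net/ glob that nvmf/common.sh expands.
for pci in 0000:09:00.0 0000:09:00.1; do
    echo "net devices under ${pci}:"
    ls "/sys/bus/pci/devices/${pci}/net/"
done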
00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:48.448 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:48.448 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:48.448 19:07:25 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:48.448 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:48.449 Found net devices under 0000:09:00.0: cvl_0_0 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:48.449 Found net devices under 0000:09:00.1: cvl_0_1 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:48.449 19:07:25 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:48.449 19:07:25 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:48.449 19:07:26 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:48.708 19:07:26 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:48.708 19:07:26 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:48.708 19:07:26 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:48.708 19:07:26 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:48.708 19:07:26 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:48.708 19:07:26 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:48.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:48.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:24:48.708 00:24:48.708 --- 10.0.0.2 ping statistics --- 00:24:48.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:48.708 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:24:48.708 19:07:26 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:48.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
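The two pings (10.0.0.2 above, 10.0.0.1 just below) verify the namespace wiring nvmf_tcp_init builds in the lines above. A sketch of that layout, assuming the interface and address values from this run (cvl_0_0/cvl_0_1, 10.0.0.1/10.0.0.2):

# Put the target-side port in its own namespace and address both ends.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Admit NVMe/TCP traffic on the initiator side, then check both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1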
00:24:48.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:24:48.708 00:24:48.708 --- 10.0.0.1 ping statistics --- 00:24:48.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:48.708 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:24:48.708 19:07:26 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:48.708 19:07:26 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:24:48.708 19:07:26 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:48.708 19:07:26 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:48.708 19:07:26 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:48.708 19:07:26 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:48.708 19:07:26 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:48.708 19:07:26 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:48.708 19:07:26 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:48.708 19:07:26 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:24:48.708 19:07:26 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:48.708 19:07:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:48.708 19:07:26 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:24:48.708 19:07:26 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:24:48.708 19:07:26 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:24:48.708 19:07:26 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:24:48.708 19:07:26 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:24:48.708 19:07:26 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:24:48.708 19:07:26 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:24:48.708 19:07:26 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:24:48.708 19:07:26 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:48.708 19:07:26 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:24:48.708 19:07:26 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:24:48.708 19:07:26 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:0b:00.0 00:24:48.708 19:07:26 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:0b:00.0 00:24:48.708 19:07:26 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:0b:00.0 00:24:48.708 19:07:26 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:0b:00.0 ']' 00:24:48.708 19:07:26 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:24:48.708 19:07:26 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:24:48.708 19:07:26 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:24:48.708 EAL: No free 2048 kB hugepages reported on node 1 00:24:52.893 
19:07:30 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F4Q1P0FGN 00:24:52.893 19:07:30 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:24:52.893 19:07:30 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:24:52.893 19:07:30 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:24:52.893 EAL: No free 2048 kB hugepages reported on node 1 00:24:57.078 19:07:34 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:24:57.078 19:07:34 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:24:57.078 19:07:34 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:57.078 19:07:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:57.078 19:07:34 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:24:57.078 19:07:34 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:57.078 19:07:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:57.078 19:07:34 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3258711 00:24:57.078 19:07:34 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:57.078 19:07:34 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:57.078 19:07:34 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3258711 00:24:57.078 19:07:34 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 3258711 ']' 00:24:57.078 19:07:34 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:57.078 19:07:34 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:57.078 19:07:34 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:57.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:57.078 19:07:34 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:57.078 19:07:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:57.078 [2024-07-24 19:07:34.596983] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:24:57.078 [2024-07-24 19:07:34.597067] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:57.078 EAL: No free 2048 kB hugepages reported on node 1 00:24:57.078 [2024-07-24 19:07:34.663433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:57.336 [2024-07-24 19:07:34.771214] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:57.336 [2024-07-24 19:07:34.771261] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
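With the serial (BTLJ72430F4Q1P0FGN) and model read from the local controller, the test brings the target app up inside the namespace paused, then configures it before initialization. A sketch of that sequence, run from an SPDK tree and assuming scripts/rpc.py as the standalone equivalent of the rpc_cmd wrapper used below:

# Start the target paused (--wait-for-rpc), as identify_passthru.sh@30 does.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
# Enable the passthru identify hook, then let initialization proceed and
# create the TCP transport with an 8192-byte IO unit size.
./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
./scripts/rpc.py framework_start_init
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192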
00:24:57.336 [2024-07-24 19:07:34.771289] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:57.336 [2024-07-24 19:07:34.771300] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:57.336 [2024-07-24 19:07:34.771309] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:57.336 [2024-07-24 19:07:34.771390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.336 [2024-07-24 19:07:34.771456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:57.336 [2024-07-24 19:07:34.771524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.336 [2024-07-24 19:07:34.771521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:57.336 19:07:34 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:57.336 19:07:34 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:24:57.336 19:07:34 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:24:57.336 19:07:34 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.336 19:07:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:57.336 INFO: Log level set to 20 00:24:57.336 INFO: Requests: 00:24:57.336 { 00:24:57.336 "jsonrpc": "2.0", 00:24:57.336 "method": "nvmf_set_config", 00:24:57.336 "id": 1, 00:24:57.336 "params": { 00:24:57.336 "admin_cmd_passthru": { 00:24:57.336 "identify_ctrlr": true 00:24:57.336 } 00:24:57.336 } 00:24:57.336 } 00:24:57.336 00:24:57.336 INFO: response: 00:24:57.336 { 00:24:57.336 "jsonrpc": "2.0", 00:24:57.336 "id": 1, 00:24:57.336 "result": true 00:24:57.336 } 00:24:57.336 00:24:57.336 19:07:34 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.336 19:07:34 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:24:57.337 19:07:34 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.337 19:07:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:57.337 INFO: Setting log level to 20 00:24:57.337 INFO: Setting log level to 20 00:24:57.337 INFO: Log level set to 20 00:24:57.337 INFO: Log level set to 20 00:24:57.337 INFO: Requests: 00:24:57.337 { 00:24:57.337 "jsonrpc": "2.0", 00:24:57.337 "method": "framework_start_init", 00:24:57.337 "id": 1 00:24:57.337 } 00:24:57.337 00:24:57.337 INFO: Requests: 00:24:57.337 { 00:24:57.337 "jsonrpc": "2.0", 00:24:57.337 "method": "framework_start_init", 00:24:57.337 "id": 1 00:24:57.337 } 00:24:57.337 00:24:57.337 [2024-07-24 19:07:34.916488] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:24:57.337 INFO: response: 00:24:57.337 { 00:24:57.337 "jsonrpc": "2.0", 00:24:57.337 "id": 1, 00:24:57.337 "result": true 00:24:57.337 } 00:24:57.337 00:24:57.337 INFO: response: 00:24:57.337 { 00:24:57.337 "jsonrpc": "2.0", 00:24:57.337 "id": 1, 00:24:57.337 "result": true 00:24:57.337 } 00:24:57.337 00:24:57.337 19:07:34 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.337 19:07:34 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:57.337 19:07:34 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.337 19:07:34 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:24:57.337 INFO: Setting log level to 40 00:24:57.337 INFO: Setting log level to 40 00:24:57.337 INFO: Setting log level to 40 00:24:57.337 [2024-07-24 19:07:34.926629] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:57.337 19:07:34 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.337 19:07:34 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:24:57.337 19:07:34 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:57.337 19:07:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:57.594 19:07:34 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 00:24:57.594 19:07:34 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.594 19:07:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:00.872 Nvme0n1 00:25:00.872 19:07:37 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.872 19:07:37 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:00.872 19:07:37 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.872 19:07:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:00.872 19:07:37 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.872 19:07:37 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:00.872 19:07:37 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.872 19:07:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:00.872 19:07:37 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.872 19:07:37 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:00.872 19:07:37 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.872 19:07:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:00.872 [2024-07-24 19:07:37.829151] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:00.872 19:07:37 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.872 19:07:37 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:00.872 19:07:37 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.872 19:07:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:00.872 [ 00:25:00.872 { 00:25:00.872 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:00.872 "subtype": "Discovery", 00:25:00.872 "listen_addresses": [], 00:25:00.872 "allow_any_host": true, 00:25:00.872 "hosts": [] 00:25:00.872 }, 00:25:00.872 { 00:25:00.872 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:00.872 "subtype": "NVMe", 00:25:00.872 "listen_addresses": [ 00:25:00.872 { 00:25:00.872 "trtype": "TCP", 00:25:00.872 "adrfam": "IPv4", 00:25:00.872 "traddr": "10.0.0.2", 00:25:00.872 "trsvcid": "4420" 00:25:00.872 } 00:25:00.872 ], 00:25:00.872 "allow_any_host": true, 00:25:00.872 "hosts": [], 00:25:00.872 "serial_number": 
"SPDK00000000000001", 00:25:00.872 "model_number": "SPDK bdev Controller", 00:25:00.872 "max_namespaces": 1, 00:25:00.872 "min_cntlid": 1, 00:25:00.872 "max_cntlid": 65519, 00:25:00.872 "namespaces": [ 00:25:00.872 { 00:25:00.872 "nsid": 1, 00:25:00.872 "bdev_name": "Nvme0n1", 00:25:00.872 "name": "Nvme0n1", 00:25:00.872 "nguid": "A02DD654AF224BD8B583D3C87162A003", 00:25:00.872 "uuid": "a02dd654-af22-4bd8-b583-d3c87162a003" 00:25:00.872 } 00:25:00.872 ] 00:25:00.872 } 00:25:00.872 ] 00:25:00.872 19:07:37 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.872 19:07:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:00.872 19:07:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:00.872 19:07:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:00.872 EAL: No free 2048 kB hugepages reported on node 1 00:25:00.872 19:07:38 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F4Q1P0FGN 00:25:00.872 19:07:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:00.872 19:07:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:00.872 19:07:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:00.872 EAL: No free 2048 kB hugepages reported on node 1 00:25:00.872 19:07:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:25:00.872 19:07:38 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F4Q1P0FGN '!=' BTLJ72430F4Q1P0FGN ']' 00:25:00.873 19:07:38 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:25:00.873 19:07:38 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:00.873 19:07:38 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.873 19:07:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:00.873 19:07:38 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.873 19:07:38 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:00.873 19:07:38 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:00.873 19:07:38 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:00.873 19:07:38 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:25:00.873 19:07:38 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:00.873 19:07:38 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:25:00.873 19:07:38 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:00.873 19:07:38 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:00.873 rmmod nvme_tcp 00:25:00.873 rmmod nvme_fabrics 00:25:00.873 rmmod nvme_keyring 00:25:00.873 19:07:38 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:00.873 19:07:38 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:25:00.873 19:07:38 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:25:00.873 19:07:38 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3258711 ']' 00:25:00.873 19:07:38 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3258711 00:25:00.873 19:07:38 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 3258711 ']' 00:25:00.873 19:07:38 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 3258711 00:25:00.873 19:07:38 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:25:00.873 19:07:38 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:00.873 19:07:38 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3258711 00:25:00.873 19:07:38 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:00.873 19:07:38 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:00.873 19:07:38 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3258711' 00:25:00.873 killing process with pid 3258711 00:25:00.873 19:07:38 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 3258711 00:25:00.873 19:07:38 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 3258711 00:25:02.771 19:07:40 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:02.771 19:07:40 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:02.771 19:07:40 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:02.771 19:07:40 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:02.771 19:07:40 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:02.771 19:07:40 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.771 19:07:40 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:02.771 19:07:40 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:04.673 19:07:42 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:04.673 00:25:04.673 real 0m18.087s 00:25:04.673 user 0m27.111s 00:25:04.673 sys 0m2.369s 00:25:04.673 19:07:42 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:04.673 19:07:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:04.673 ************************************ 00:25:04.673 END TEST nvmf_identify_passthru 00:25:04.673 ************************************ 00:25:04.673 19:07:42 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:04.673 19:07:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:04.673 19:07:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:04.673 19:07:42 -- common/autotest_common.sh@10 -- # set +x 00:25:04.673 ************************************ 00:25:04.673 START TEST nvmf_dif 00:25:04.673 ************************************ 00:25:04.673 19:07:42 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:04.673 * Looking for test storage... 
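With the subsystem torn down, it is worth noting that the pass/fail criterion of nvmf_identify_passthru reduces to string equality: the serial and model numbers read back through the NVMe/TCP passthru controller must match what the local PCIe controller reports. A condensed sketch of that check, with the grep/awk pipeline and the values taken from this run ($id is just shorthand for the spdk_nvme_identify path used above):

  id=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify
  pcie_serial=$($id -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 | grep 'Serial Number:' | awk '{print $3}')
  tcp_serial=$($id -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep 'Serial Number:' | awk '{print $3}')
  [ "$pcie_serial" != "$tcp_serial" ] && exit 1    # both read BTLJ72430F4Q1P0FGN here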
00:25:04.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:04.673 19:07:42 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:04.673 19:07:42 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:04.673 19:07:42 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:04.673 19:07:42 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:04.673 19:07:42 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.673 19:07:42 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.673 19:07:42 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.673 19:07:42 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:25:04.673 19:07:42 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:04.673 19:07:42 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:25:04.673 19:07:42 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:04.673 19:07:42 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:04.673 19:07:42 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:25:04.673 19:07:42 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:04.673 19:07:42 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:04.673 19:07:42 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:04.673 19:07:42 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:25:04.673 19:07:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:06.584 19:07:44 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:06.584 19:07:44 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:25:06.584 19:07:44 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:06.584 19:07:44 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:06.584 19:07:44 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:06.584 19:07:44 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:06.584 19:07:44 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:06.584 19:07:44 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:25:06.584 19:07:44 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:06.584 19:07:44 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:25:06.584 19:07:44 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:25:06.584 19:07:44 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:25:06.584 19:07:44 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:25:06.584 19:07:44 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:25:06.584 19:07:44 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:25:06.584 19:07:44 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:06.584 19:07:44 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:06.584 19:07:44 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:06.584 19:07:44 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:06.584 19:07:44 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:06.584 19:07:44 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:06.584 19:07:44 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:06.584 19:07:44 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:06.584 19:07:44 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:06.584 19:07:44 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:06.584 19:07:44 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:06.584 19:07:44 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:06.584 19:07:44 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:06.584 19:07:44 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:06.584 19:07:44 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:06.584 19:07:44 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:06.584 19:07:44 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:06.585 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:06.585 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
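The device scan above works purely from PCI vendor:device IDs: both E810 ports on this node match $intel:0x159b, and each matched function is then resolved to its kernel interface through sysfs, which is where the "Found net devices under ..." lines come from. A minimal sketch of that resolution, using the two ports from this run:

  # Each matched PCI function exposes its netdev name under its sysfs node.
  for bdf in 0000:09:00.0 0000:09:00.1; do
      echo "Found net devices under $bdf: $(ls /sys/bus/pci/devices/$bdf/net/)"
  done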
00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:06.585 Found net devices under 0000:09:00.0: cvl_0_0 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:06.585 Found net devices under 0000:09:00.1: cvl_0_1 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:06.585 19:07:44 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:06.841 19:07:44 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:06.841 19:07:44 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:06.841 19:07:44 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:06.841 19:07:44 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:06.841 19:07:44 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:06.841 19:07:44 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:06.841 19:07:44 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:06.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:06.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:25:06.841 00:25:06.841 --- 10.0.0.2 ping statistics --- 00:25:06.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.841 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:25:06.841 19:07:44 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:06.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:06.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:25:06.841 00:25:06.841 --- 10.0.0.1 ping statistics --- 00:25:06.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.841 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:25:06.841 19:07:44 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:06.841 19:07:44 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:25:06.841 19:07:44 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:25:06.841 19:07:44 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:07.773 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:07.773 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:07.773 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:07.773 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:07.773 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:07.773 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:07.773 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:07.773 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:07.773 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:07.773 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:07.773 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:07.773 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:07.773 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:07.773 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:07.773 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:07.773 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:07.773 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:08.032 19:07:45 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:08.032 19:07:45 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:08.032 19:07:45 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:08.032 19:07:45 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:08.032 19:07:45 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:08.032 19:07:45 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:08.032 19:07:45 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:08.032 19:07:45 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:25:08.032 19:07:45 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:08.032 19:07:45 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:08.032 19:07:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:08.032 19:07:45 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3261932 00:25:08.032 19:07:45 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:08.032 19:07:45 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3261932 00:25:08.032 19:07:45 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 3261932 ']' 00:25:08.032 19:07:45 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.032 19:07:45 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:08.032 19:07:45 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:08.032 19:07:45 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:08.032 19:07:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:08.032 [2024-07-24 19:07:45.574388] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:25:08.032 [2024-07-24 19:07:45.574478] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:08.032 EAL: No free 2048 kB hugepages reported on node 1 00:25:08.290 [2024-07-24 19:07:45.643629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.290 [2024-07-24 19:07:45.761889] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:08.290 [2024-07-24 19:07:45.761940] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:08.290 [2024-07-24 19:07:45.761957] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.290 [2024-07-24 19:07:45.761971] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.290 [2024-07-24 19:07:45.761982] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
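This second nvmf_tgt runs inside cvl_0_0_ns_spdk because nvmf_tcp_init split the two E810 ports across namespaces earlier: cvl_0_0 (10.0.0.2) moved into the namespace as the target-side interface, while cvl_0_1 (10.0.0.1) stayed in the default namespace as the initiator, and the two pings above proved the path in both directions. Condensed from the nvmf/common.sh trace earlier in this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator stays in the default ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in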
00:25:08.290 [2024-07-24 19:07:45.762020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.290 19:07:45 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:08.290 19:07:45 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:25:08.290 19:07:45 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:08.290 19:07:45 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:08.290 19:07:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:08.549 19:07:45 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:08.549 19:07:45 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:25:08.549 19:07:45 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:08.549 19:07:45 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.549 19:07:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:08.549 [2024-07-24 19:07:45.916141] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:08.549 19:07:45 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.549 19:07:45 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:08.549 19:07:45 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:08.549 19:07:45 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:08.549 19:07:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:08.549 ************************************ 00:25:08.549 START TEST fio_dif_1_default 00:25:08.549 ************************************ 00:25:08.549 19:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:25:08.549 19:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:25:08.549 19:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:25:08.549 19:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:25:08.549 19:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:25:08.549 19:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:25:08.549 19:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:08.549 19:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:08.550 bdev_null0 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:08.550 [2024-07-24 19:07:45.976442] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:08.550 { 00:25:08.550 "params": { 00:25:08.550 "name": "Nvme$subsystem", 00:25:08.550 "trtype": "$TEST_TRANSPORT", 00:25:08.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:08.550 "adrfam": "ipv4", 00:25:08.550 "trsvcid": "$NVMF_PORT", 00:25:08.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:08.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:08.550 "hdgst": ${hdgst:-false}, 00:25:08.550 "ddgst": ${ddgst:-false} 00:25:08.550 }, 00:25:08.550 "method": "bdev_nvme_attach_controller" 00:25:08.550 } 00:25:08.550 EOF 00:25:08.550 )") 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default 
-- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:25:08.550 19:07:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:08.550 "params": { 00:25:08.550 "name": "Nvme0", 00:25:08.550 "trtype": "tcp", 00:25:08.550 "traddr": "10.0.0.2", 00:25:08.550 "adrfam": "ipv4", 00:25:08.550 "trsvcid": "4420", 00:25:08.550 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:08.550 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:08.550 "hdgst": false, 00:25:08.550 "ddgst": false 00:25:08.550 }, 00:25:08.550 "method": "bdev_nvme_attach_controller" 00:25:08.550 }' 00:25:08.550 19:07:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:08.550 19:07:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:08.550 19:07:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:08.550 19:07:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:08.550 19:07:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:08.550 19:07:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:08.550 19:07:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:08.550 19:07:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:08.550 19:07:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:08.550 19:07:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:08.808 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:08.808 fio-3.35 00:25:08.808 Starting 1 thread 00:25:08.808 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.057 00:25:21.057 filename0: (groupid=0, jobs=1): err= 0: pid=3262166: Wed Jul 24 19:07:56 2024 00:25:21.057 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10013msec) 00:25:21.057 slat (nsec): min=4601, max=35567, avg=9743.74, stdev=2692.84 00:25:21.057 clat (usec): min=40883, max=47383, avg=41004.06, stdev=417.37 00:25:21.057 lat (usec): min=40891, max=47402, avg=41013.81, stdev=417.61 00:25:21.057 clat percentiles (usec): 00:25:21.057 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:25:21.057 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:25:21.057 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:25:21.057 | 99.00th=[41681], 99.50th=[41681], 99.90th=[47449], 99.95th=[47449], 00:25:21.057 | 99.99th=[47449] 00:25:21.057 bw ( KiB/s): min= 384, max= 416, per=99.51%, avg=388.80, stdev=11.72, samples=20 00:25:21.057 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:25:21.057 
lat (msec) : 50=100.00% 00:25:21.057 cpu : usr=89.45%, sys=10.30%, ctx=14, majf=0, minf=247 00:25:21.057 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:21.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:21.057 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:21.057 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:21.057 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:21.057 00:25:21.057 Run status group 0 (all jobs): 00:25:21.057 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10013-10013msec 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.057 00:25:21.057 real 0m11.173s 00:25:21.057 user 0m10.243s 00:25:21.057 sys 0m1.292s 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:21.057 ************************************ 00:25:21.057 END TEST fio_dif_1_default 00:25:21.057 ************************************ 00:25:21.057 19:07:57 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:21.057 19:07:57 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:21.057 19:07:57 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:21.057 19:07:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:21.057 ************************************ 00:25:21.057 START TEST fio_dif_1_multi_subsystems 00:25:21.057 ************************************ 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:25:21.057 19:07:57 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:21.057 bdev_null0 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:21.057 [2024-07-24 19:07:57.203098] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:25:21.057 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:21.058 bdev_null1 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:21.058 { 00:25:21.058 "params": { 00:25:21.058 "name": "Nvme$subsystem", 00:25:21.058 "trtype": "$TEST_TRANSPORT", 00:25:21.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:21.058 "adrfam": "ipv4", 00:25:21.058 "trsvcid": "$NVMF_PORT", 00:25:21.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:21.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:21.058 "hdgst": ${hdgst:-false}, 00:25:21.058 "ddgst": ${ddgst:-false} 00:25:21.058 }, 00:25:21.058 "method": "bdev_nvme_attach_controller" 00:25:21.058 } 00:25:21.058 EOF 00:25:21.058 )") 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:25:21.058 19:07:57 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:21.058 { 00:25:21.058 "params": { 00:25:21.058 "name": "Nvme$subsystem", 00:25:21.058 "trtype": "$TEST_TRANSPORT", 00:25:21.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:21.058 "adrfam": "ipv4", 00:25:21.058 "trsvcid": "$NVMF_PORT", 00:25:21.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:21.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:21.058 "hdgst": ${hdgst:-false}, 00:25:21.058 "ddgst": ${ddgst:-false} 00:25:21.058 }, 00:25:21.058 "method": "bdev_nvme_attach_controller" 00:25:21.058 } 00:25:21.058 EOF 00:25:21.058 )") 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
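What follows assembles the fio side of the two-subsystem run: gen_nvmf_target_json emits one bdev_nvme_attach_controller stanza per subsystem (Nvme0 and Nvme1 below), jq joins them into a single JSON config that fio reads through /dev/fd/62, a generated job file arrives on /dev/fd/61, and the SPDK fio plugin is preloaded. As a sketch, an equivalent standalone invocation might look like this (bdev.json and dif.fio are illustrative stand-ins for the two descriptors; the plugin path is the one this run preloads):

  plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
  LD_PRELOAD=$plugin /usr/src/fio/fio --ioengine=spdk_bdev \
      --spdk_json_conf bdev.json dif.fio   # job file names one Nvme<N>n1 bdev per thread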
00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:21.058 "params": { 00:25:21.058 "name": "Nvme0", 00:25:21.058 "trtype": "tcp", 00:25:21.058 "traddr": "10.0.0.2", 00:25:21.058 "adrfam": "ipv4", 00:25:21.058 "trsvcid": "4420", 00:25:21.058 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:21.058 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:21.058 "hdgst": false, 00:25:21.058 "ddgst": false 00:25:21.058 }, 00:25:21.058 "method": "bdev_nvme_attach_controller" 00:25:21.058 },{ 00:25:21.058 "params": { 00:25:21.058 "name": "Nvme1", 00:25:21.058 "trtype": "tcp", 00:25:21.058 "traddr": "10.0.0.2", 00:25:21.058 "adrfam": "ipv4", 00:25:21.058 "trsvcid": "4420", 00:25:21.058 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:21.058 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:21.058 "hdgst": false, 00:25:21.058 "ddgst": false 00:25:21.058 }, 00:25:21.058 "method": "bdev_nvme_attach_controller" 00:25:21.058 }' 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:21.058 19:07:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:21.058 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:21.058 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:21.058 fio-3.35 00:25:21.058 Starting 2 threads 00:25:21.058 EAL: No free 2048 kB hugepages reported on node 1 00:25:31.019 00:25:31.019 filename0: (groupid=0, jobs=1): err= 0: pid=3263573: Wed Jul 24 19:08:08 2024 00:25:31.019 read: IOPS=190, BW=762KiB/s (780kB/s)(7616KiB/10001msec) 00:25:31.019 slat (nsec): min=8064, max=30659, avg=9607.86, stdev=2288.15 00:25:31.019 clat (usec): min=777, max=43251, avg=20979.06, stdev=20104.23 00:25:31.019 lat (usec): min=786, max=43266, avg=20988.67, stdev=20104.11 00:25:31.019 clat percentiles (usec): 00:25:31.019 | 1.00th=[ 816], 5.00th=[ 832], 10.00th=[ 848], 20.00th=[ 873], 00:25:31.019 | 30.00th=[ 898], 40.00th=[ 906], 50.00th=[ 1012], 60.00th=[41157], 00:25:31.019 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:25:31.019 | 99.00th=[41157], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:25:31.019 | 99.99th=[43254] 
00:25:31.019 bw ( KiB/s): min= 702, max= 768, per=50.02%, avg=761.16, stdev=20.50, samples=19 00:25:31.019 iops : min= 175, max= 192, avg=190.26, stdev= 5.21, samples=19 00:25:31.019 lat (usec) : 1000=49.68% 00:25:31.019 lat (msec) : 2=0.32%, 50=50.00% 00:25:31.019 cpu : usr=94.22%, sys=5.51%, ctx=13, majf=0, minf=179 00:25:31.019 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:31.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.019 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:31.019 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:31.019 filename1: (groupid=0, jobs=1): err= 0: pid=3263574: Wed Jul 24 19:08:08 2024 00:25:31.019 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10002msec) 00:25:31.019 slat (nsec): min=5556, max=19821, avg=9531.62, stdev=2182.69 00:25:31.019 clat (usec): min=748, max=43262, avg=21025.75, stdev=20109.33 00:25:31.019 lat (usec): min=756, max=43277, avg=21035.29, stdev=20109.20 00:25:31.019 clat percentiles (usec): 00:25:31.019 | 1.00th=[ 783], 5.00th=[ 824], 10.00th=[ 840], 20.00th=[ 873], 00:25:31.019 | 30.00th=[ 889], 40.00th=[ 906], 50.00th=[41157], 60.00th=[41157], 00:25:31.019 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:25:31.019 | 99.00th=[41157], 99.50th=[41681], 99.90th=[43254], 99.95th=[43254], 00:25:31.019 | 99.99th=[43254] 00:25:31.019 bw ( KiB/s): min= 702, max= 768, per=50.02%, avg=761.16, stdev=20.50, samples=19 00:25:31.019 iops : min= 175, max= 192, avg=190.26, stdev= 5.21, samples=19 00:25:31.019 lat (usec) : 750=0.05%, 1000=49.00% 00:25:31.019 lat (msec) : 2=0.84%, 50=50.11% 00:25:31.019 cpu : usr=94.43%, sys=5.29%, ctx=12, majf=0, minf=31 00:25:31.019 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:31.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.019 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:31.019 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:31.019 00:25:31.019 Run status group 0 (all jobs): 00:25:31.019 READ: bw=1521KiB/s (1558kB/s), 760KiB/s-762KiB/s (778kB/s-780kB/s), io=14.9MiB (15.6MB), run=10001-10002msec 00:25:31.019 19:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:25:31.019 19:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:25:31.019 19:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:25:31.019 19:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:31.019 19:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:25:31.019 19:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:31.019 19:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.019 19:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:31.019 19:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.019 19:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:31.019 19:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.019 19:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:31.019 19:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.019 19:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:25:31.019 19:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:31.019 19:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:25:31.019 19:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:31.020 19:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.020 19:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:31.020 19:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.020 19:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:31.020 19:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.020 19:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:31.020 19:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.020 00:25:31.020 real 0m11.288s 00:25:31.020 user 0m20.201s 00:25:31.020 sys 0m1.369s 00:25:31.020 19:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:31.020 19:08:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:31.020 ************************************ 00:25:31.020 END TEST fio_dif_1_multi_subsystems 00:25:31.020 ************************************ 00:25:31.020 19:08:08 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:25:31.020 19:08:08 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:31.020 19:08:08 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:31.020 19:08:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:31.020 ************************************ 00:25:31.020 START TEST fio_dif_rand_params 00:25:31.020 ************************************ 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:25:31.020 
19:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:31.020 bdev_null0 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:31.020 [2024-07-24 19:08:08.541937] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:31.020 { 00:25:31.020 "params": { 00:25:31.020 "name": "Nvme$subsystem", 00:25:31.020 "trtype": "$TEST_TRANSPORT", 00:25:31.020 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.020 "adrfam": "ipv4", 00:25:31.020 
"trsvcid": "$NVMF_PORT", 00:25:31.020 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.020 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.020 "hdgst": ${hdgst:-false}, 00:25:31.020 "ddgst": ${ddgst:-false} 00:25:31.020 }, 00:25:31.020 "method": "bdev_nvme_attach_controller" 00:25:31.020 } 00:25:31.020 EOF 00:25:31.020 )") 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:31.020 "params": { 00:25:31.020 "name": "Nvme0", 00:25:31.020 "trtype": "tcp", 00:25:31.020 "traddr": "10.0.0.2", 00:25:31.020 "adrfam": "ipv4", 00:25:31.020 "trsvcid": "4420", 00:25:31.020 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:31.020 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:31.020 "hdgst": false, 00:25:31.020 "ddgst": false 00:25:31.020 }, 00:25:31.020 "method": "bdev_nvme_attach_controller" 00:25:31.020 }' 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:31.020 19:08:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:31.277 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:31.277 ... 
00:25:31.277 fio-3.35 00:25:31.277 Starting 3 threads 00:25:31.277 EAL: No free 2048 kB hugepages reported on node 1 00:25:37.837 00:25:37.837 filename0: (groupid=0, jobs=1): err= 0: pid=3264969: Wed Jul 24 19:08:14 2024 00:25:37.837 read: IOPS=196, BW=24.6MiB/s (25.8MB/s)(124MiB/5040msec) 00:25:37.837 slat (nsec): min=4464, max=32642, avg=13449.45, stdev=1938.68 00:25:37.837 clat (usec): min=5087, max=94183, avg=15219.75, stdev=14111.64 00:25:37.837 lat (usec): min=5100, max=94197, avg=15233.20, stdev=14111.55 00:25:37.837 clat percentiles (usec): 00:25:37.837 | 1.00th=[ 5473], 5.00th=[ 5735], 10.00th=[ 5997], 20.00th=[ 7504], 00:25:37.837 | 30.00th=[ 8717], 40.00th=[ 9503], 50.00th=[10683], 60.00th=[11994], 00:25:37.837 | 70.00th=[13042], 80.00th=[14222], 90.00th=[49021], 95.00th=[52167], 00:25:37.837 | 99.00th=[56361], 99.50th=[56886], 99.90th=[93848], 99.95th=[93848], 00:25:37.837 | 99.99th=[93848] 00:25:37.837 bw ( KiB/s): min=13312, max=32512, per=31.60%, avg=25318.40, stdev=7059.96, samples=10 00:25:37.837 iops : min= 104, max= 254, avg=197.80, stdev=55.16, samples=10 00:25:37.837 lat (msec) : 10=43.75%, 20=44.25%, 50=3.33%, 100=8.67% 00:25:37.837 cpu : usr=92.84%, sys=6.71%, ctx=8, majf=0, minf=64 00:25:37.837 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:37.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.837 issued rwts: total=992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.837 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:37.837 filename0: (groupid=0, jobs=1): err= 0: pid=3264970: Wed Jul 24 19:08:14 2024 00:25:37.837 read: IOPS=211, BW=26.4MiB/s (27.7MB/s)(133MiB/5024msec) 00:25:37.837 slat (nsec): min=4981, max=89877, avg=15341.87, stdev=4976.70 00:25:37.837 clat (usec): min=5244, max=88895, avg=14181.07, stdev=12948.57 00:25:37.837 lat (usec): min=5257, max=88910, avg=14196.41, stdev=12948.68 00:25:37.837 clat percentiles (usec): 00:25:37.837 | 1.00th=[ 5735], 5.00th=[ 6063], 10.00th=[ 6915], 20.00th=[ 8160], 00:25:37.837 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[10159], 60.00th=[11469], 00:25:37.837 | 70.00th=[12256], 80.00th=[13042], 90.00th=[15795], 95.00th=[51119], 00:25:37.837 | 99.00th=[54789], 99.50th=[55313], 99.90th=[88605], 99.95th=[88605], 00:25:37.837 | 99.99th=[88605] 00:25:37.837 bw ( KiB/s): min=18432, max=35072, per=33.81%, avg=27089.60, stdev=6300.36, samples=10 00:25:37.837 iops : min= 144, max= 274, avg=211.60, stdev=49.24, samples=10 00:25:37.837 lat (msec) : 10=48.54%, 20=41.56%, 50=3.58%, 100=6.31% 00:25:37.837 cpu : usr=90.74%, sys=7.57%, ctx=198, majf=0, minf=104 00:25:37.837 IO depths : 1=1.8%, 2=98.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:37.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.837 issued rwts: total=1061,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.837 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:37.837 filename0: (groupid=0, jobs=1): err= 0: pid=3264971: Wed Jul 24 19:08:14 2024 00:25:37.837 read: IOPS=218, BW=27.3MiB/s (28.7MB/s)(138MiB/5037msec) 00:25:37.837 slat (nsec): min=4521, max=29107, avg=13481.02, stdev=1958.77 00:25:37.837 clat (usec): min=5230, max=92303, avg=13691.54, stdev=12331.31 00:25:37.837 lat (usec): min=5242, max=92316, avg=13705.02, stdev=12331.24 00:25:37.837 clat percentiles (usec): 
00:25:37.837 | 1.00th=[ 5604], 5.00th=[ 5997], 10.00th=[ 6587], 20.00th=[ 7963], 00:25:37.837 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[10159], 60.00th=[11207], 00:25:37.837 | 70.00th=[11994], 80.00th=[12911], 90.00th=[14877], 95.00th=[49546], 00:25:37.837 | 99.00th=[53216], 99.50th=[53740], 99.90th=[89654], 99.95th=[92799], 00:25:37.837 | 99.99th=[92799] 00:25:37.837 bw ( KiB/s): min=20992, max=34816, per=35.12%, avg=28140.50, stdev=4848.08, samples=10 00:25:37.837 iops : min= 164, max= 272, avg=219.80, stdev=37.85, samples=10 00:25:37.837 lat (msec) : 10=48.91%, 20=41.65%, 50=4.81%, 100=4.63% 00:25:37.837 cpu : usr=92.85%, sys=6.65%, ctx=40, majf=0, minf=122 00:25:37.837 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:37.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.837 issued rwts: total=1102,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.837 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:37.837 00:25:37.837 Run status group 0 (all jobs): 00:25:37.837 READ: bw=78.2MiB/s (82.1MB/s), 24.6MiB/s-27.3MiB/s (25.8MB/s-28.7MB/s), io=394MiB (414MB), run=5024-5040msec 00:25:37.837 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:25:37.837 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:25:37.837 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:37.837 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:37.837 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:25:37.837 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:37.837 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.837 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:37.837 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.837 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:37.837 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.837 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:37.837 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.837 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:25:37.837 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:25:37.837 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:25:37.837 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:25:37.837 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:25:37.837 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:25:37.837 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:25:37.837 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:37.837 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:37.837 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:25:37.837 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
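The rpc_cmd calls that follow (bdev_null_create with --dif-type 2, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener, and their destroy_subsystems counterparts) reduce to a short round trip against the target. A sketch using the stock scripts/rpc.py client; treating rpc_cmd as a thin wrapper over rpc.py is an assumption here:

# One create/destroy round trip from target/dif.sh, spelled out with
# the stock RPC client; every call below appears verbatim in the trace.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# 64 MB null bdev, 512-byte blocks with 16-byte metadata, DIF type 2.
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

# Teardown mirrors target/dif.sh@36-@39 (destroy_subsystem) above.
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_null_delete bdev_null0
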
00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:37.838 bdev_null0 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:37.838 [2024-07-24 19:08:14.732805] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:37.838 bdev_null1 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:37.838 bdev_null2 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:37.838 { 00:25:37.838 "params": { 00:25:37.838 "name": "Nvme$subsystem", 00:25:37.838 "trtype": "$TEST_TRANSPORT", 00:25:37.838 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:25:37.838 "adrfam": "ipv4", 00:25:37.838 "trsvcid": "$NVMF_PORT", 00:25:37.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:37.838 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:37.838 "hdgst": ${hdgst:-false}, 00:25:37.838 "ddgst": ${ddgst:-false} 00:25:37.838 }, 00:25:37.838 "method": "bdev_nvme_attach_controller" 00:25:37.838 } 00:25:37.838 EOF 00:25:37.838 )") 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:37.838 { 00:25:37.838 "params": { 00:25:37.838 "name": "Nvme$subsystem", 00:25:37.838 "trtype": "$TEST_TRANSPORT", 00:25:37.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:37.838 "adrfam": "ipv4", 00:25:37.838 "trsvcid": "$NVMF_PORT", 00:25:37.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:37.838 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:37.838 "hdgst": ${hdgst:-false}, 00:25:37.838 "ddgst": ${ddgst:-false} 00:25:37.838 }, 00:25:37.838 "method": "bdev_nvme_attach_controller" 00:25:37.838 } 00:25:37.838 EOF 00:25:37.838 )") 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- 
# (( file++ )) 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:37.838 19:08:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:37.838 { 00:25:37.838 "params": { 00:25:37.838 "name": "Nvme$subsystem", 00:25:37.838 "trtype": "$TEST_TRANSPORT", 00:25:37.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:37.838 "adrfam": "ipv4", 00:25:37.839 "trsvcid": "$NVMF_PORT", 00:25:37.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:37.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:37.839 "hdgst": ${hdgst:-false}, 00:25:37.839 "ddgst": ${ddgst:-false} 00:25:37.839 }, 00:25:37.839 "method": "bdev_nvme_attach_controller" 00:25:37.839 } 00:25:37.839 EOF 00:25:37.839 )") 00:25:37.839 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:25:37.839 19:08:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:37.839 19:08:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:37.839 19:08:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:25:37.839 19:08:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:25:37.839 19:08:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:37.839 "params": { 00:25:37.839 "name": "Nvme0", 00:25:37.839 "trtype": "tcp", 00:25:37.839 "traddr": "10.0.0.2", 00:25:37.839 "adrfam": "ipv4", 00:25:37.839 "trsvcid": "4420", 00:25:37.839 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:37.839 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:37.839 "hdgst": false, 00:25:37.839 "ddgst": false 00:25:37.839 }, 00:25:37.839 "method": "bdev_nvme_attach_controller" 00:25:37.839 },{ 00:25:37.839 "params": { 00:25:37.839 "name": "Nvme1", 00:25:37.839 "trtype": "tcp", 00:25:37.839 "traddr": "10.0.0.2", 00:25:37.839 "adrfam": "ipv4", 00:25:37.839 "trsvcid": "4420", 00:25:37.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:37.839 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:37.839 "hdgst": false, 00:25:37.839 "ddgst": false 00:25:37.839 }, 00:25:37.839 "method": "bdev_nvme_attach_controller" 00:25:37.839 },{ 00:25:37.839 "params": { 00:25:37.839 "name": "Nvme2", 00:25:37.839 "trtype": "tcp", 00:25:37.839 "traddr": "10.0.0.2", 00:25:37.839 "adrfam": "ipv4", 00:25:37.839 "trsvcid": "4420", 00:25:37.839 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:37.839 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:37.839 "hdgst": false, 00:25:37.839 "ddgst": false 00:25:37.839 }, 00:25:37.839 "method": "bdev_nvme_attach_controller" 00:25:37.839 }' 00:25:37.839 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:37.839 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:37.839 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:37.839 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:37.839 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:37.839 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:37.839 19:08:14 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:25:37.839 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:37.839 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:37.839 19:08:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:37.839 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:37.839 ... 00:25:37.839 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:37.839 ... 00:25:37.839 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:37.839 ... 00:25:37.839 fio-3.35 00:25:37.839 Starting 24 threads 00:25:37.839 EAL: No free 2048 kB hugepages reported on node 1 00:25:50.039 00:25:50.039 filename0: (groupid=0, jobs=1): err= 0: pid=3265832: Wed Jul 24 19:08:26 2024 00:25:50.039 read: IOPS=413, BW=1654KiB/s (1693kB/s)(16.2MiB/10023msec) 00:25:50.039 slat (usec): min=6, max=145, avg=73.33, stdev=15.91 00:25:50.039 clat (msec): min=25, max=197, avg=38.05, stdev=25.29 00:25:50.039 lat (msec): min=25, max=197, avg=38.12, stdev=25.29 00:25:50.039 clat percentiles (msec): 00:25:50.039 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:25:50.039 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:25:50.039 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:25:50.039 | 99.00th=[ 192], 99.50th=[ 197], 99.90th=[ 199], 99.95th=[ 199], 00:25:50.039 | 99.99th=[ 199] 00:25:50.039 bw ( KiB/s): min= 256, max= 1920, per=4.18%, avg=1651.20, stdev=550.78, samples=20 00:25:50.039 iops : min= 64, max= 480, avg=412.80, stdev=137.70, samples=20 00:25:50.039 lat (msec) : 50=96.14%, 100=1.16%, 250=2.70% 00:25:50.039 cpu : usr=91.51%, sys=4.07%, ctx=100, majf=0, minf=44 00:25:50.039 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:25:50.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.039 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.039 issued rwts: total=4144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.039 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.039 filename0: (groupid=0, jobs=1): err= 0: pid=3265833: Wed Jul 24 19:08:26 2024 00:25:50.039 read: IOPS=412, BW=1649KiB/s (1689kB/s)(16.1MiB/10012msec) 00:25:50.039 slat (nsec): min=8075, max=63326, avg=18671.61, stdev=10457.65 00:25:50.039 clat (msec): min=20, max=269, avg=38.64, stdev=26.88 00:25:50.039 lat (msec): min=20, max=269, avg=38.66, stdev=26.88 00:25:50.039 clat percentiles (msec): 00:25:50.039 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:25:50.039 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:25:50.039 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 35], 00:25:50.039 | 99.00th=[ 201], 99.50th=[ 203], 99.90th=[ 205], 99.95th=[ 205], 00:25:50.039 | 99.99th=[ 271] 00:25:50.039 bw ( KiB/s): min= 256, max= 1920, per=4.16%, avg=1644.80, stdev=565.27, samples=20 00:25:50.039 iops : min= 64, max= 480, avg=411.20, stdev=141.32, samples=20 00:25:50.039 lat (msec) : 50=96.90%, 250=3.05%, 500=0.05% 00:25:50.039 cpu : usr=94.96%, sys=2.89%, ctx=463, majf=0, minf=48 00:25:50.039 IO depths : 
1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:25:50.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.039 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.039 issued rwts: total=4128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.040 filename0: (groupid=0, jobs=1): err= 0: pid=3265834: Wed Jul 24 19:08:26 2024 00:25:50.040 read: IOPS=413, BW=1654KiB/s (1694kB/s)(16.2MiB/10020msec) 00:25:50.040 slat (nsec): min=5033, max=90367, avg=35507.12, stdev=10096.32 00:25:50.040 clat (msec): min=24, max=197, avg=38.36, stdev=25.27 00:25:50.040 lat (msec): min=24, max=198, avg=38.39, stdev=25.27 00:25:50.040 clat percentiles (msec): 00:25:50.040 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:25:50.040 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:25:50.040 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:25:50.040 | 99.00th=[ 192], 99.50th=[ 197], 99.90th=[ 199], 99.95th=[ 199], 00:25:50.040 | 99.99th=[ 199] 00:25:50.040 bw ( KiB/s): min= 256, max= 1920, per=4.18%, avg=1651.20, stdev=550.78, samples=20 00:25:50.040 iops : min= 64, max= 480, avg=412.80, stdev=137.70, samples=20 00:25:50.040 lat (msec) : 50=96.14%, 100=1.16%, 250=2.70% 00:25:50.040 cpu : usr=97.87%, sys=1.62%, ctx=59, majf=0, minf=28 00:25:50.040 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:25:50.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.040 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.040 issued rwts: total=4144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.040 filename0: (groupid=0, jobs=1): err= 0: pid=3265835: Wed Jul 24 19:08:26 2024 00:25:50.040 read: IOPS=412, BW=1650KiB/s (1689kB/s)(16.1MiB/10010msec) 00:25:50.040 slat (usec): min=8, max=106, avg=31.50, stdev=21.01 00:25:50.040 clat (msec): min=13, max=449, avg=38.50, stdev=31.68 00:25:50.040 lat (msec): min=13, max=450, avg=38.53, stdev=31.69 00:25:50.040 clat percentiles (msec): 00:25:50.040 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:25:50.040 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:25:50.040 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:25:50.040 | 99.00th=[ 192], 99.50th=[ 205], 99.90th=[ 384], 99.95th=[ 384], 00:25:50.040 | 99.99th=[ 451] 00:25:50.040 bw ( KiB/s): min= 128, max= 1920, per=4.13%, avg=1630.32, stdev=597.12, samples=19 00:25:50.040 iops : min= 32, max= 480, avg=407.58, stdev=149.28, samples=19 00:25:50.040 lat (msec) : 20=0.65%, 50=96.63%, 100=0.05%, 250=2.28%, 500=0.39% 00:25:50.040 cpu : usr=97.71%, sys=1.66%, ctx=109, majf=0, minf=23 00:25:50.040 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:25:50.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.040 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.040 issued rwts: total=4128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.040 filename0: (groupid=0, jobs=1): err= 0: pid=3265836: Wed Jul 24 19:08:26 2024 00:25:50.040 read: IOPS=411, BW=1644KiB/s (1684kB/s)(16.1MiB/10002msec) 00:25:50.040 slat (nsec): min=9752, max=78106, avg=33008.80, stdev=10120.25 00:25:50.040 clat (msec): min=24, max=328, 
avg=38.60, stdev=29.22 00:25:50.040 lat (msec): min=24, max=328, avg=38.64, stdev=29.22 00:25:50.040 clat percentiles (msec): 00:25:50.040 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:25:50.040 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:25:50.040 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:25:50.040 | 99.00th=[ 194], 99.50th=[ 199], 99.90th=[ 330], 99.95th=[ 330], 00:25:50.040 | 99.99th=[ 330] 00:25:50.040 bw ( KiB/s): min= 256, max= 1920, per=4.13%, avg=1630.32, stdev=578.51, samples=19 00:25:50.040 iops : min= 64, max= 480, avg=407.58, stdev=144.63, samples=19 00:25:50.040 lat (msec) : 50=96.89%, 100=0.39%, 250=2.33%, 500=0.39% 00:25:50.040 cpu : usr=97.84%, sys=1.63%, ctx=34, majf=0, minf=26 00:25:50.040 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:25:50.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.040 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.040 issued rwts: total=4112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.040 filename0: (groupid=0, jobs=1): err= 0: pid=3265837: Wed Jul 24 19:08:26 2024 00:25:50.040 read: IOPS=411, BW=1644KiB/s (1684kB/s)(16.1MiB/10002msec) 00:25:50.040 slat (usec): min=13, max=133, avg=70.91, stdev=19.05 00:25:50.040 clat (msec): min=24, max=270, avg=38.27, stdev=28.34 00:25:50.040 lat (msec): min=24, max=270, avg=38.35, stdev=28.34 00:25:50.040 clat percentiles (msec): 00:25:50.040 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:25:50.040 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:25:50.040 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:25:50.040 | 99.00th=[ 197], 99.50th=[ 199], 99.90th=[ 271], 99.95th=[ 271], 00:25:50.040 | 99.99th=[ 271] 00:25:50.040 bw ( KiB/s): min= 240, max= 1920, per=4.13%, avg=1630.32, stdev=580.52, samples=19 00:25:50.040 iops : min= 60, max= 480, avg=407.58, stdev=145.13, samples=19 00:25:50.040 lat (msec) : 50=96.94%, 100=0.34%, 250=2.29%, 500=0.44% 00:25:50.040 cpu : usr=90.77%, sys=4.31%, ctx=135, majf=0, minf=31 00:25:50.040 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:25:50.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.040 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.040 issued rwts: total=4112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.040 filename0: (groupid=0, jobs=1): err= 0: pid=3265838: Wed Jul 24 19:08:26 2024 00:25:50.040 read: IOPS=412, BW=1649KiB/s (1689kB/s)(16.1MiB/10013msec) 00:25:50.040 slat (usec): min=11, max=113, avg=64.96, stdev=21.82 00:25:50.040 clat (msec): min=14, max=316, avg=38.23, stdev=29.38 00:25:50.040 lat (msec): min=14, max=316, avg=38.29, stdev=29.38 00:25:50.040 clat percentiles (msec): 00:25:50.040 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:25:50.040 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:25:50.040 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:25:50.040 | 99.00th=[ 194], 99.50th=[ 197], 99.90th=[ 317], 99.95th=[ 317], 00:25:50.040 | 99.99th=[ 317] 00:25:50.040 bw ( KiB/s): min= 128, max= 1920, per=4.13%, avg=1630.32, stdev=580.08, samples=19 00:25:50.040 iops : min= 32, max= 480, avg=407.58, stdev=145.02, samples=19 00:25:50.040 lat (msec) : 20=0.39%, 
50=96.90%, 250=2.33%, 500=0.39% 00:25:50.040 cpu : usr=94.54%, sys=2.79%, ctx=202, majf=0, minf=25 00:25:50.040 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:25:50.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.040 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.040 issued rwts: total=4128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.040 filename0: (groupid=0, jobs=1): err= 0: pid=3265839: Wed Jul 24 19:08:26 2024 00:25:50.040 read: IOPS=412, BW=1649KiB/s (1689kB/s)(16.1MiB/10011msec) 00:25:50.040 slat (usec): min=8, max=110, avg=27.46, stdev=13.68 00:25:50.040 clat (msec): min=14, max=314, avg=38.60, stdev=29.40 00:25:50.040 lat (msec): min=14, max=314, avg=38.63, stdev=29.41 00:25:50.040 clat percentiles (msec): 00:25:50.040 | 1.00th=[ 28], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:25:50.040 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:25:50.040 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 35], 00:25:50.040 | 99.00th=[ 194], 99.50th=[ 197], 99.90th=[ 313], 99.95th=[ 313], 00:25:50.040 | 99.99th=[ 313] 00:25:50.040 bw ( KiB/s): min= 128, max= 1920, per=4.13%, avg=1631.16, stdev=579.29, samples=19 00:25:50.040 iops : min= 32, max= 480, avg=407.79, stdev=144.82, samples=19 00:25:50.040 lat (msec) : 20=0.44%, 50=96.61%, 100=0.24%, 250=2.28%, 500=0.44% 00:25:50.040 cpu : usr=94.93%, sys=2.95%, ctx=247, majf=0, minf=27 00:25:50.040 IO depths : 1=0.2%, 2=5.5%, 4=21.1%, 8=60.1%, 16=13.2%, 32=0.0%, >=64=0.0% 00:25:50.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.040 complete : 0=0.0%, 4=93.6%, 8=1.7%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.040 issued rwts: total=4128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.040 filename1: (groupid=0, jobs=1): err= 0: pid=3265840: Wed Jul 24 19:08:26 2024 00:25:50.040 read: IOPS=412, BW=1649KiB/s (1689kB/s)(16.1MiB/10013msec) 00:25:50.040 slat (nsec): min=8233, max=94138, avg=29027.26, stdev=15586.55 00:25:50.040 clat (msec): min=14, max=316, avg=38.53, stdev=29.36 00:25:50.040 lat (msec): min=14, max=316, avg=38.56, stdev=29.36 00:25:50.040 clat percentiles (msec): 00:25:50.040 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:25:50.040 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:25:50.040 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:25:50.040 | 99.00th=[ 194], 99.50th=[ 197], 99.90th=[ 317], 99.95th=[ 317], 00:25:50.040 | 99.99th=[ 317] 00:25:50.040 bw ( KiB/s): min= 128, max= 1920, per=4.13%, avg=1630.32, stdev=580.08, samples=19 00:25:50.040 iops : min= 32, max= 480, avg=407.58, stdev=145.02, samples=19 00:25:50.040 lat (msec) : 20=0.39%, 50=96.90%, 250=2.33%, 500=0.39% 00:25:50.040 cpu : usr=97.79%, sys=1.53%, ctx=38, majf=0, minf=23 00:25:50.040 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:25:50.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.040 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.040 issued rwts: total=4128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.040 filename1: (groupid=0, jobs=1): err= 0: pid=3265841: Wed Jul 24 19:08:26 2024 00:25:50.040 read: IOPS=412, BW=1651KiB/s 
(1690kB/s)(16.1MiB/10003msec) 00:25:50.040 slat (usec): min=9, max=102, avg=39.24, stdev=17.15 00:25:50.040 clat (msec): min=25, max=247, avg=38.41, stdev=25.59 00:25:50.040 lat (msec): min=25, max=247, avg=38.45, stdev=25.59 00:25:50.040 clat percentiles (msec): 00:25:50.040 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:25:50.040 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:25:50.040 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:25:50.040 | 99.00th=[ 192], 99.50th=[ 194], 99.90th=[ 199], 99.95th=[ 245], 00:25:50.040 | 99.99th=[ 249] 00:25:50.041 bw ( KiB/s): min= 272, max= 1920, per=4.14%, avg=1637.05, stdev=564.00, samples=19 00:25:50.041 iops : min= 68, max= 480, avg=409.26, stdev=141.00, samples=19 00:25:50.041 lat (msec) : 50=96.51%, 100=0.39%, 250=3.10% 00:25:50.041 cpu : usr=97.68%, sys=1.72%, ctx=107, majf=0, minf=46 00:25:50.041 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:25:50.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.041 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.041 issued rwts: total=4128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.041 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.041 filename1: (groupid=0, jobs=1): err= 0: pid=3265842: Wed Jul 24 19:08:26 2024 00:25:50.041 read: IOPS=411, BW=1644KiB/s (1684kB/s)(16.1MiB/10002msec) 00:25:50.041 slat (usec): min=10, max=125, avg=39.17, stdev=16.61 00:25:50.041 clat (msec): min=25, max=328, avg=38.57, stdev=28.38 00:25:50.041 lat (msec): min=25, max=328, avg=38.61, stdev=28.38 00:25:50.041 clat percentiles (msec): 00:25:50.041 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:25:50.041 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:25:50.041 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:25:50.041 | 99.00th=[ 197], 99.50th=[ 199], 99.90th=[ 271], 99.95th=[ 271], 00:25:50.041 | 99.99th=[ 330] 00:25:50.041 bw ( KiB/s): min= 240, max= 1920, per=4.13%, avg=1630.32, stdev=580.52, samples=19 00:25:50.041 iops : min= 60, max= 480, avg=407.58, stdev=145.13, samples=19 00:25:50.041 lat (msec) : 50=96.94%, 100=0.34%, 250=2.33%, 500=0.39% 00:25:50.041 cpu : usr=92.90%, sys=3.50%, ctx=126, majf=0, minf=33 00:25:50.041 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:25:50.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.041 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.041 issued rwts: total=4112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.041 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.041 filename1: (groupid=0, jobs=1): err= 0: pid=3265843: Wed Jul 24 19:08:26 2024 00:25:50.041 read: IOPS=413, BW=1654KiB/s (1694kB/s)(16.2MiB/10020msec) 00:25:50.041 slat (usec): min=8, max=177, avg=30.93, stdev=13.74 00:25:50.041 clat (msec): min=25, max=253, avg=38.44, stdev=25.38 00:25:50.041 lat (msec): min=25, max=253, avg=38.47, stdev=25.38 00:25:50.041 clat percentiles (msec): 00:25:50.041 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:25:50.041 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:25:50.041 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 35], 00:25:50.041 | 99.00th=[ 194], 99.50th=[ 197], 99.90th=[ 199], 99.95th=[ 199], 00:25:50.041 | 99.99th=[ 255] 00:25:50.041 bw ( KiB/s): min= 272, max= 1920, per=4.18%, avg=1651.20, stdev=550.42, 
samples=20 00:25:50.041 iops : min= 68, max= 480, avg=412.80, stdev=137.60, samples=20 00:25:50.041 lat (msec) : 50=96.19%, 100=1.11%, 250=2.65%, 500=0.05% 00:25:50.041 cpu : usr=97.65%, sys=1.77%, ctx=35, majf=0, minf=51 00:25:50.041 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:25:50.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.041 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.041 issued rwts: total=4144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.041 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.041 filename1: (groupid=0, jobs=1): err= 0: pid=3265844: Wed Jul 24 19:08:26 2024 00:25:50.041 read: IOPS=412, BW=1649KiB/s (1689kB/s)(16.1MiB/10013msec) 00:25:50.041 slat (nsec): min=8600, max=99957, avg=31226.39, stdev=15771.95 00:25:50.041 clat (msec): min=14, max=388, avg=38.55, stdev=29.62 00:25:50.041 lat (msec): min=14, max=388, avg=38.58, stdev=29.62 00:25:50.041 clat percentiles (msec): 00:25:50.041 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:25:50.041 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:25:50.041 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:25:50.041 | 99.00th=[ 194], 99.50th=[ 197], 99.90th=[ 317], 99.95th=[ 317], 00:25:50.041 | 99.99th=[ 388] 00:25:50.041 bw ( KiB/s): min= 128, max= 1920, per=4.13%, avg=1630.32, stdev=580.08, samples=19 00:25:50.041 iops : min= 32, max= 480, avg=407.58, stdev=145.02, samples=19 00:25:50.041 lat (msec) : 20=0.39%, 50=96.90%, 250=2.33%, 500=0.39% 00:25:50.041 cpu : usr=97.24%, sys=1.74%, ctx=28, majf=0, minf=42 00:25:50.041 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:25:50.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.041 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.041 issued rwts: total=4128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.041 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.041 filename1: (groupid=0, jobs=1): err= 0: pid=3265845: Wed Jul 24 19:08:26 2024 00:25:50.041 read: IOPS=412, BW=1650KiB/s (1689kB/s)(16.1MiB/10010msec) 00:25:50.041 slat (usec): min=9, max=111, avg=63.78, stdev=22.03 00:25:50.041 clat (msec): min=13, max=386, avg=38.22, stdev=31.39 00:25:50.041 lat (msec): min=13, max=386, avg=38.28, stdev=31.39 00:25:50.041 clat percentiles (msec): 00:25:50.041 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:25:50.041 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:25:50.041 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:25:50.041 | 99.00th=[ 192], 99.50th=[ 203], 99.90th=[ 388], 99.95th=[ 388], 00:25:50.041 | 99.99th=[ 388] 00:25:50.041 bw ( KiB/s): min= 128, max= 1920, per=4.13%, avg=1630.32, stdev=597.09, samples=19 00:25:50.041 iops : min= 32, max= 480, avg=407.58, stdev=149.27, samples=19 00:25:50.041 lat (msec) : 20=0.39%, 50=96.90%, 250=2.33%, 500=0.39% 00:25:50.041 cpu : usr=92.64%, sys=3.67%, ctx=108, majf=0, minf=37 00:25:50.041 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:25:50.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.041 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.041 issued rwts: total=4128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.041 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.041 filename1: (groupid=0, 
jobs=1): err= 0: pid=3265846: Wed Jul 24 19:08:26 2024 00:25:50.041 read: IOPS=412, BW=1649KiB/s (1689kB/s)(16.1MiB/10011msec) 00:25:50.041 slat (usec): min=8, max=122, avg=71.44, stdev=20.19 00:25:50.041 clat (msec): min=21, max=195, avg=38.17, stdev=26.24 00:25:50.041 lat (msec): min=21, max=196, avg=38.24, stdev=26.24 00:25:50.041 clat percentiles (msec): 00:25:50.041 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:25:50.041 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:25:50.041 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:25:50.041 | 99.00th=[ 194], 99.50th=[ 194], 99.90th=[ 197], 99.95th=[ 197], 00:25:50.041 | 99.99th=[ 197] 00:25:50.041 bw ( KiB/s): min= 256, max= 1920, per=4.16%, avg=1644.80, stdev=565.27, samples=20 00:25:50.041 iops : min= 64, max= 480, avg=411.20, stdev=141.32, samples=20 00:25:50.041 lat (msec) : 50=96.90%, 250=3.10% 00:25:50.041 cpu : usr=97.87%, sys=1.41%, ctx=35, majf=0, minf=36 00:25:50.041 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:25:50.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.041 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.041 issued rwts: total=4128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.041 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.041 filename1: (groupid=0, jobs=1): err= 0: pid=3265847: Wed Jul 24 19:08:26 2024 00:25:50.041 read: IOPS=413, BW=1655KiB/s (1695kB/s)(16.2MiB/10014msec) 00:25:50.041 slat (usec): min=4, max=134, avg=17.02, stdev=13.14 00:25:50.041 clat (msec): min=33, max=260, avg=38.51, stdev=25.39 00:25:50.041 lat (msec): min=33, max=261, avg=38.53, stdev=25.40 00:25:50.041 clat percentiles (msec): 00:25:50.041 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:25:50.041 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:25:50.041 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 35], 00:25:50.041 | 99.00th=[ 192], 99.50th=[ 201], 99.90th=[ 249], 99.95th=[ 249], 00:25:50.041 | 99.99th=[ 262] 00:25:50.041 bw ( KiB/s): min= 256, max= 1920, per=4.18%, avg=1651.20, stdev=550.78, samples=20 00:25:50.041 iops : min= 64, max= 480, avg=412.80, stdev=137.70, samples=20 00:25:50.041 lat (msec) : 50=96.19%, 100=0.77%, 250=2.99%, 500=0.05% 00:25:50.041 cpu : usr=97.16%, sys=1.85%, ctx=125, majf=0, minf=62 00:25:50.041 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:25:50.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.041 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.041 issued rwts: total=4144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.041 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.041 filename2: (groupid=0, jobs=1): err= 0: pid=3265848: Wed Jul 24 19:08:26 2024 00:25:50.041 read: IOPS=412, BW=1649KiB/s (1689kB/s)(16.1MiB/10013msec) 00:25:50.041 slat (usec): min=11, max=129, avg=63.77, stdev=21.84 00:25:50.041 clat (msec): min=20, max=263, avg=38.25, stdev=27.05 00:25:50.041 lat (msec): min=20, max=263, avg=38.31, stdev=27.05 00:25:50.041 clat percentiles (msec): 00:25:50.041 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:25:50.041 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:25:50.041 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:25:50.041 | 99.00th=[ 201], 99.50th=[ 203], 99.90th=[ 247], 99.95th=[ 249], 00:25:50.041 | 99.99th=[ 264] 
00:25:50.041 bw ( KiB/s): min= 256, max= 1920, per=4.16%, avg=1644.80, stdev=567.39, samples=20 00:25:50.041 iops : min= 64, max= 480, avg=411.20, stdev=141.85, samples=20 00:25:50.041 lat (msec) : 50=96.90%, 250=3.05%, 500=0.05% 00:25:50.041 cpu : usr=91.81%, sys=3.90%, ctx=144, majf=0, minf=30 00:25:50.041 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:25:50.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.041 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.041 issued rwts: total=4128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.042 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.042 filename2: (groupid=0, jobs=1): err= 0: pid=3265849: Wed Jul 24 19:08:26 2024 00:25:50.042 read: IOPS=413, BW=1652KiB/s (1692kB/s)(16.2MiB/10031msec) 00:25:50.042 slat (usec): min=6, max=116, avg=57.11, stdev=30.71 00:25:50.042 clat (msec): min=24, max=251, avg=38.25, stdev=25.37 00:25:50.042 lat (msec): min=24, max=251, avg=38.30, stdev=25.37 00:25:50.042 clat percentiles (msec): 00:25:50.042 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:25:50.042 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:25:50.042 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:25:50.042 | 99.00th=[ 192], 99.50th=[ 197], 99.90th=[ 199], 99.95th=[ 213], 00:25:50.042 | 99.99th=[ 251] 00:25:50.042 bw ( KiB/s): min= 256, max= 1920, per=4.18%, avg=1651.20, stdev=550.27, samples=20 00:25:50.042 iops : min= 64, max= 480, avg=412.80, stdev=137.57, samples=20 00:25:50.042 lat (msec) : 50=96.14%, 100=1.16%, 250=2.65%, 500=0.05% 00:25:50.042 cpu : usr=93.51%, sys=3.58%, ctx=227, majf=0, minf=38 00:25:50.042 IO depths : 1=2.1%, 2=8.3%, 4=25.0%, 8=54.2%, 16=10.4%, 32=0.0%, >=64=0.0% 00:25:50.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.042 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.042 issued rwts: total=4144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.042 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.042 filename2: (groupid=0, jobs=1): err= 0: pid=3265850: Wed Jul 24 19:08:26 2024 00:25:50.042 read: IOPS=411, BW=1644KiB/s (1684kB/s)(16.1MiB/10003msec) 00:25:50.042 slat (nsec): min=4009, max=59746, avg=21392.14, stdev=11085.77 00:25:50.042 clat (msec): min=20, max=393, avg=38.74, stdev=31.71 00:25:50.042 lat (msec): min=20, max=393, avg=38.76, stdev=31.71 00:25:50.042 clat percentiles (msec): 00:25:50.042 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:25:50.042 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:25:50.042 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 35], 00:25:50.042 | 99.00th=[ 192], 99.50th=[ 205], 99.90th=[ 393], 99.95th=[ 393], 00:25:50.042 | 99.99th=[ 393] 00:25:50.042 bw ( KiB/s): min= 128, max= 1920, per=4.13%, avg=1630.32, stdev=597.09, samples=19 00:25:50.042 iops : min= 32, max= 480, avg=407.58, stdev=149.27, samples=19 00:25:50.042 lat (msec) : 50=97.28%, 250=2.33%, 500=0.39% 00:25:50.042 cpu : usr=98.00%, sys=1.45%, ctx=61, majf=0, minf=37 00:25:50.042 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:25:50.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.042 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.042 issued rwts: total=4112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.042 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:25:50.042 filename2: (groupid=0, jobs=1): err= 0: pid=3265851: Wed Jul 24 19:08:26 2024 00:25:50.042 read: IOPS=412, BW=1650KiB/s (1689kB/s)(16.1MiB/10009msec) 00:25:50.042 slat (usec): min=8, max=110, avg=69.03, stdev=18.32 00:25:50.042 clat (msec): min=14, max=313, avg=38.18, stdev=29.60 00:25:50.042 lat (msec): min=14, max=313, avg=38.25, stdev=29.60 00:25:50.042 clat percentiles (msec): 00:25:50.042 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:25:50.042 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:25:50.042 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:25:50.042 | 99.00th=[ 194], 99.50th=[ 271], 99.90th=[ 313], 99.95th=[ 313], 00:25:50.042 | 99.99th=[ 313] 00:25:50.042 bw ( KiB/s): min= 128, max= 1920, per=4.13%, avg=1630.32, stdev=580.08, samples=19 00:25:50.042 iops : min= 32, max= 480, avg=407.58, stdev=145.02, samples=19 00:25:50.042 lat (msec) : 20=0.39%, 50=96.90%, 250=2.18%, 500=0.53% 00:25:50.042 cpu : usr=97.99%, sys=1.53%, ctx=12, majf=0, minf=23 00:25:50.042 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:25:50.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.042 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.042 issued rwts: total=4128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.042 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.042 filename2: (groupid=0, jobs=1): err= 0: pid=3265852: Wed Jul 24 19:08:26 2024 00:25:50.042 read: IOPS=412, BW=1649KiB/s (1688kB/s)(16.1MiB/10010msec) 00:25:50.042 slat (usec): min=8, max=124, avg=29.43, stdev=20.65 00:25:50.042 clat (msec): min=13, max=385, avg=38.57, stdev=31.38 00:25:50.042 lat (msec): min=13, max=386, avg=38.60, stdev=31.38 00:25:50.042 clat percentiles (msec): 00:25:50.042 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:25:50.042 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:25:50.042 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:25:50.042 | 99.00th=[ 192], 99.50th=[ 205], 99.90th=[ 384], 99.95th=[ 388], 00:25:50.042 | 99.99th=[ 388] 00:25:50.042 bw ( KiB/s): min= 128, max= 1920, per=4.12%, avg=1629.47, stdev=596.17, samples=19 00:25:50.042 iops : min= 32, max= 480, avg=407.37, stdev=149.04, samples=19 00:25:50.042 lat (msec) : 20=0.51%, 50=96.78%, 250=2.33%, 500=0.39% 00:25:50.042 cpu : usr=97.56%, sys=1.76%, ctx=48, majf=0, minf=31 00:25:50.042 IO depths : 1=2.3%, 2=8.5%, 4=25.0%, 8=54.0%, 16=10.2%, 32=0.0%, >=64=0.0% 00:25:50.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.042 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.042 issued rwts: total=4126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.042 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.042 filename2: (groupid=0, jobs=1): err= 0: pid=3265853: Wed Jul 24 19:08:26 2024 00:25:50.042 read: IOPS=411, BW=1644KiB/s (1684kB/s)(16.1MiB/10002msec) 00:25:50.042 slat (usec): min=8, max=109, avg=34.58, stdev=14.33 00:25:50.042 clat (msec): min=24, max=270, avg=38.62, stdev=28.32 00:25:50.042 lat (msec): min=24, max=270, avg=38.65, stdev=28.32 00:25:50.042 clat percentiles (msec): 00:25:50.042 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:25:50.042 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:25:50.042 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:25:50.042 | 99.00th=[ 
197], 99.50th=[ 199], 99.90th=[ 271], 99.95th=[ 271], 00:25:50.042 | 99.99th=[ 271] 00:25:50.042 bw ( KiB/s): min= 256, max= 1920, per=4.13%, avg=1630.32, stdev=578.34, samples=19 00:25:50.042 iops : min= 64, max= 480, avg=407.58, stdev=144.58, samples=19 00:25:50.042 lat (msec) : 50=96.94%, 100=0.34%, 250=2.33%, 500=0.39% 00:25:50.042 cpu : usr=97.49%, sys=1.74%, ctx=190, majf=0, minf=37 00:25:50.042 IO depths : 1=4.5%, 2=10.8%, 4=25.0%, 8=51.7%, 16=8.0%, 32=0.0%, >=64=0.0% 00:25:50.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.042 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.042 issued rwts: total=4112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.042 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.042 filename2: (groupid=0, jobs=1): err= 0: pid=3265854: Wed Jul 24 19:08:26 2024 00:25:50.042 read: IOPS=410, BW=1644KiB/s (1683kB/s)(16.1MiB/10006msec) 00:25:50.042 slat (usec): min=8, max=177, avg=17.41, stdev=10.87 00:25:50.042 clat (msec): min=20, max=323, avg=38.80, stdev=29.63 00:25:50.042 lat (msec): min=20, max=323, avg=38.82, stdev=29.63 00:25:50.042 clat percentiles (msec): 00:25:50.042 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:25:50.042 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:25:50.042 | 70.00th=[ 35], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 35], 00:25:50.042 | 99.00th=[ 194], 99.50th=[ 197], 99.90th=[ 326], 99.95th=[ 326], 00:25:50.042 | 99.99th=[ 326] 00:25:50.042 bw ( KiB/s): min= 128, max= 1920, per=4.12%, avg=1629.47, stdev=596.17, samples=19 00:25:50.042 iops : min= 32, max= 480, avg=407.37, stdev=149.04, samples=19 00:25:50.042 lat (msec) : 50=97.28%, 250=2.33%, 500=0.39% 00:25:50.042 cpu : usr=96.79%, sys=2.11%, ctx=189, majf=0, minf=33 00:25:50.042 IO depths : 1=2.4%, 2=8.6%, 4=25.0%, 8=53.9%, 16=10.1%, 32=0.0%, >=64=0.0% 00:25:50.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.042 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.042 issued rwts: total=4112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.042 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.042 filename2: (groupid=0, jobs=1): err= 0: pid=3265855: Wed Jul 24 19:08:26 2024 00:25:50.042 read: IOPS=412, BW=1651KiB/s (1690kB/s)(16.1MiB/10002msec) 00:25:50.042 slat (nsec): min=8617, max=71336, avg=31655.81, stdev=11317.73 00:25:50.042 clat (msec): min=25, max=245, avg=38.51, stdev=25.76 00:25:50.042 lat (msec): min=25, max=245, avg=38.54, stdev=25.76 00:25:50.042 clat percentiles (msec): 00:25:50.042 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:25:50.042 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:25:50.042 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 35], 00:25:50.042 | 99.00th=[ 194], 99.50th=[ 201], 99.90th=[ 201], 99.95th=[ 239], 00:25:50.042 | 99.99th=[ 245] 00:25:50.042 bw ( KiB/s): min= 256, max= 1920, per=4.14%, avg=1637.05, stdev=564.18, samples=19 00:25:50.042 iops : min= 64, max= 480, avg=409.26, stdev=141.04, samples=19 00:25:50.042 lat (msec) : 50=96.51%, 100=0.39%, 250=3.10% 00:25:50.042 cpu : usr=97.34%, sys=2.29%, ctx=30, majf=0, minf=40 00:25:50.042 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:25:50.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.042 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.042 issued rwts: 
total=4128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.042 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.042 00:25:50.042 Run status group 0 (all jobs): 00:25:50.042 READ: bw=38.6MiB/s (40.4MB/s), 1644KiB/s-1655KiB/s (1683kB/s-1695kB/s), io=387MiB (406MB), run=10002-10031msec 00:25:50.042 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:25:50.042 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:25:50.042 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:50.042 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:50.042 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:25:50.042 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete 
bdev_null2 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:50.043 bdev_null0 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:50.043 [2024-07-24 19:08:26.482718] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 
-- # local sub_id=1 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:50.043 bdev_null1 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:50.043 { 00:25:50.043 "params": { 00:25:50.043 "name": "Nvme$subsystem", 00:25:50.043 "trtype": "$TEST_TRANSPORT", 00:25:50.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.043 "adrfam": "ipv4", 00:25:50.043 "trsvcid": "$NVMF_PORT", 00:25:50.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.043 "hdgst": ${hdgst:-false}, 00:25:50.043 "ddgst": ${ddgst:-false} 00:25:50.043 }, 00:25:50.043 "method": "bdev_nvme_attach_controller" 00:25:50.043 } 00:25:50.043 EOF 00:25:50.043 )") 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:50.043 19:08:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:50.043 { 00:25:50.043 "params": { 00:25:50.043 "name": "Nvme$subsystem", 00:25:50.043 "trtype": "$TEST_TRANSPORT", 00:25:50.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.043 "adrfam": "ipv4", 00:25:50.043 "trsvcid": "$NVMF_PORT", 00:25:50.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.043 "hdgst": ${hdgst:-false}, 00:25:50.044 "ddgst": ${ddgst:-false} 00:25:50.044 }, 00:25:50.044 "method": "bdev_nvme_attach_controller" 00:25:50.044 } 00:25:50.044 EOF 00:25:50.044 )") 00:25:50.044 19:08:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:50.044 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:25:50.044 19:08:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:50.044 19:08:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
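The trace above brings the whole DIF target up with four JSON-RPCs: a null bdev carrying 16 bytes of per-block metadata with DIF type 1 protection (guard CRC, application tag, incrementing reference tag), an NVMe-oF subsystem, a namespace, and a TCP listener on 10.0.0.2:4420. The rpc_cmd calls are effectively thin wrappers over scripts/rpc.py, so the same sequence can be replayed standalone against a running nvmf_tgt, assuming the TCP transport has already been created as the harness does earlier in the run:

  # 64 MB null bdev, 512 B blocks + 16 B metadata, protection information type 1
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  # subsystem, namespace, and TCP listener, exactly as traced above
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420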
00:25:50.044 19:08:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:25:50.044 19:08:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:50.044 "params": { 00:25:50.044 "name": "Nvme0", 00:25:50.044 "trtype": "tcp", 00:25:50.044 "traddr": "10.0.0.2", 00:25:50.044 "adrfam": "ipv4", 00:25:50.044 "trsvcid": "4420", 00:25:50.044 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:50.044 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:50.044 "hdgst": false, 00:25:50.044 "ddgst": false 00:25:50.044 }, 00:25:50.044 "method": "bdev_nvme_attach_controller" 00:25:50.044 },{ 00:25:50.044 "params": { 00:25:50.044 "name": "Nvme1", 00:25:50.044 "trtype": "tcp", 00:25:50.044 "traddr": "10.0.0.2", 00:25:50.044 "adrfam": "ipv4", 00:25:50.044 "trsvcid": "4420", 00:25:50.044 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:50.044 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:50.044 "hdgst": false, 00:25:50.044 "ddgst": false 00:25:50.044 }, 00:25:50.044 "method": "bdev_nvme_attach_controller" 00:25:50.044 }' 00:25:50.044 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:50.044 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:50.044 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:50.044 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:50.044 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:50.044 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:50.044 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:50.044 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:50.044 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:50.044 19:08:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:50.044 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:50.044 ... 00:25:50.044 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:50.044 ... 
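The job listing just above comes from a config that gen_fio_conf writes on the fly from the parameters set earlier (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5): two [filename] sections, one per attached controller, each cloned twice by numjobs, giving the four threads fio starts below. A sketch of a generated job file consistent with that banner; the Nvme0n1/Nvme1n1 names follow SPDK's <controller>n<namespace> bdev naming convention and are assumptions, not values copied from this run:

  cat > job.fio <<'EOF'
  [global]
  ioengine=spdk_bdev   ; I/O goes to SPDK bdevs, not kernel block devices
  thread=1             ; fio reports "threads", not forked processes
  rw=randread
  bs=8k,16k,128k       ; read,write,trim sizes: (R) 8192B, (W) 16.0KiB, (T) 128KiB
  iodepth=8
  numjobs=2
  runtime=5
  time_based=1         ; assumed: every job lands right at ~5 s of runtime
  [filename0]
  filename=Nvme0n1     ; assumed bdev name for controller Nvme0
  [filename1]
  filename=Nvme1n1     ; assumed bdev name for controller Nvme1
  EOF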
00:25:50.044 fio-3.35 00:25:50.044 Starting 4 threads 00:25:50.044 EAL: No free 2048 kB hugepages reported on node 1 00:25:55.315 00:25:55.315 filename0: (groupid=0, jobs=1): err= 0: pid=3267244: Wed Jul 24 19:08:32 2024 00:25:55.315 read: IOPS=1787, BW=14.0MiB/s (14.6MB/s)(69.9MiB/5006msec) 00:25:55.315 slat (nsec): min=6980, max=83428, avg=15907.17, stdev=9545.36 00:25:55.315 clat (usec): min=1180, max=8143, avg=4422.68, stdev=739.49 00:25:55.315 lat (usec): min=1197, max=8169, avg=4438.59, stdev=740.03 00:25:55.315 clat percentiles (usec): 00:25:55.315 | 1.00th=[ 2737], 5.00th=[ 3294], 10.00th=[ 3589], 20.00th=[ 3982], 00:25:55.315 | 30.00th=[ 4228], 40.00th=[ 4359], 50.00th=[ 4424], 60.00th=[ 4555], 00:25:55.315 | 70.00th=[ 4621], 80.00th=[ 4686], 90.00th=[ 5014], 95.00th=[ 5866], 00:25:55.315 | 99.00th=[ 7046], 99.50th=[ 7373], 99.90th=[ 7898], 99.95th=[ 8094], 00:25:55.315 | 99.99th=[ 8160] 00:25:55.315 bw ( KiB/s): min=13648, max=15472, per=25.52%, avg=14305.60, stdev=533.14, samples=10 00:25:55.315 iops : min= 1706, max= 1934, avg=1788.20, stdev=66.64, samples=10 00:25:55.315 lat (msec) : 2=0.12%, 4=20.86%, 10=79.02% 00:25:55.315 cpu : usr=93.55%, sys=5.95%, ctx=10, majf=0, minf=105 00:25:55.315 IO depths : 1=0.1%, 2=10.0%, 4=62.7%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:55.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.315 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.315 issued rwts: total=8946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:55.315 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:55.315 filename0: (groupid=0, jobs=1): err= 0: pid=3267245: Wed Jul 24 19:08:32 2024 00:25:55.315 read: IOPS=1709, BW=13.4MiB/s (14.0MB/s)(66.8MiB/5001msec) 00:25:55.315 slat (nsec): min=7554, max=69575, avg=18207.61, stdev=9941.50 00:25:55.315 clat (usec): min=917, max=8316, avg=4621.16, stdev=721.01 00:25:55.315 lat (usec): min=931, max=8326, avg=4639.37, stdev=720.11 00:25:55.315 clat percentiles (usec): 00:25:55.315 | 1.00th=[ 3097], 5.00th=[ 3720], 10.00th=[ 4015], 20.00th=[ 4228], 00:25:55.315 | 30.00th=[ 4359], 40.00th=[ 4424], 50.00th=[ 4490], 60.00th=[ 4555], 00:25:55.315 | 70.00th=[ 4686], 80.00th=[ 4883], 90.00th=[ 5538], 95.00th=[ 6194], 00:25:55.315 | 99.00th=[ 7111], 99.50th=[ 7308], 99.90th=[ 7701], 99.95th=[ 7832], 00:25:55.315 | 99.99th=[ 8291] 00:25:55.315 bw ( KiB/s): min=13168, max=14016, per=24.38%, avg=13665.00, stdev=274.63, samples=10 00:25:55.315 iops : min= 1646, max= 1752, avg=1708.10, stdev=34.37, samples=10 00:25:55.315 lat (usec) : 1000=0.01% 00:25:55.315 lat (msec) : 2=0.26%, 4=9.80%, 10=89.93% 00:25:55.315 cpu : usr=94.28%, sys=5.16%, ctx=11, majf=0, minf=122 00:25:55.315 IO depths : 1=0.1%, 2=7.7%, 4=64.5%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:55.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.315 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.315 issued rwts: total=8547,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:55.315 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:55.315 filename1: (groupid=0, jobs=1): err= 0: pid=3267246: Wed Jul 24 19:08:32 2024 00:25:55.316 read: IOPS=1774, BW=13.9MiB/s (14.5MB/s)(69.4MiB/5004msec) 00:25:55.316 slat (nsec): min=6988, max=70062, avg=16664.48, stdev=9509.49 00:25:55.316 clat (usec): min=1488, max=8319, avg=4455.04, stdev=658.87 00:25:55.316 lat (usec): min=1513, max=8340, avg=4471.71, stdev=659.34 00:25:55.316 clat percentiles (usec): 
00:25:55.316 | 1.00th=[ 2802], 5.00th=[ 3425], 10.00th=[ 3720], 20.00th=[ 4047], 00:25:55.316 | 30.00th=[ 4228], 40.00th=[ 4359], 50.00th=[ 4490], 60.00th=[ 4555], 00:25:55.316 | 70.00th=[ 4621], 80.00th=[ 4752], 90.00th=[ 5145], 95.00th=[ 5669], 00:25:55.316 | 99.00th=[ 6718], 99.50th=[ 7111], 99.90th=[ 7832], 99.95th=[ 7898], 00:25:55.316 | 99.99th=[ 8291] 00:25:55.316 bw ( KiB/s): min=13696, max=14893, per=25.33%, avg=14196.50, stdev=337.64, samples=10 00:25:55.316 iops : min= 1712, max= 1861, avg=1774.50, stdev=42.06, samples=10 00:25:55.316 lat (msec) : 2=0.08%, 4=18.36%, 10=81.56% 00:25:55.316 cpu : usr=93.92%, sys=5.60%, ctx=9, majf=0, minf=70 00:25:55.316 IO depths : 1=0.1%, 2=7.6%, 4=63.9%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:55.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.316 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.316 issued rwts: total=8879,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:55.316 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:55.316 filename1: (groupid=0, jobs=1): err= 0: pid=3267247: Wed Jul 24 19:08:32 2024 00:25:55.316 read: IOPS=1740, BW=13.6MiB/s (14.3MB/s)(68.0MiB/5002msec) 00:25:55.316 slat (nsec): min=7443, max=67680, avg=17211.71, stdev=9753.16 00:25:55.316 clat (usec): min=923, max=8688, avg=4540.12, stdev=750.77 00:25:55.316 lat (usec): min=936, max=8718, avg=4557.33, stdev=750.20 00:25:55.316 clat percentiles (usec): 00:25:55.316 | 1.00th=[ 2933], 5.00th=[ 3490], 10.00th=[ 3785], 20.00th=[ 4113], 00:25:55.316 | 30.00th=[ 4293], 40.00th=[ 4424], 50.00th=[ 4490], 60.00th=[ 4555], 00:25:55.316 | 70.00th=[ 4621], 80.00th=[ 4752], 90.00th=[ 5407], 95.00th=[ 6128], 00:25:55.316 | 99.00th=[ 7046], 99.50th=[ 7308], 99.90th=[ 8291], 99.95th=[ 8356], 00:25:55.316 | 99.99th=[ 8717] 00:25:55.316 bw ( KiB/s): min=13248, max=14336, per=24.83%, avg=13916.80, stdev=341.65, samples=10 00:25:55.316 iops : min= 1656, max= 1792, avg=1739.60, stdev=42.71, samples=10 00:25:55.316 lat (usec) : 1000=0.01% 00:25:55.316 lat (msec) : 2=0.14%, 4=15.58%, 10=84.27% 00:25:55.316 cpu : usr=95.00%, sys=4.46%, ctx=14, majf=0, minf=72 00:25:55.316 IO depths : 1=0.1%, 2=9.7%, 4=62.5%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:55.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.316 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.316 issued rwts: total=8704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:55.316 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:55.316 00:25:55.316 Run status group 0 (all jobs): 00:25:55.316 READ: bw=54.7MiB/s (57.4MB/s), 13.4MiB/s-14.0MiB/s (14.0MB/s-14.6MB/s), io=274MiB (287MB), run=5001-5006msec 00:25:55.316 19:08:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:25:55.316 19:08:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:25:55.316 19:08:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:55.316 19:08:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:55.316 19:08:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:25:55.316 19:08:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:55.316 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.316 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
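The READ summary a few records up checks out arithmetically: randread only exercises the 8 KiB read size from the bs triple, so bandwidth is simply IOPS times 8 KiB, and fio prints binary units with the decimal equivalent in parentheses. In shell arithmetic:

  # per job: pid 3267244 reported IOPS=1787 at an 8 KiB read size
  echo $((1787 * 8))    # 14296 KiB/s ~= 14.0 MiB/s, as reported
  # group: (1787+1709+1774+1740) = 7010 IOPS -> 56080 KiB/s ~= 54.7 MiB/s
  # units: 54.7 MiB/s x 1048576 B/MiB ~= 57.4 MB/s, the figure in parentheses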
00:25:55.316 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.316 19:08:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:55.316 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.316 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:55.316 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.316 19:08:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:55.316 19:08:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:55.316 19:08:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:25:55.316 19:08:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:55.316 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.316 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:55.316 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.316 19:08:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:55.316 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.316 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:55.316 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.316 00:25:55.316 real 0m24.257s 00:25:55.316 user 4m27.637s 00:25:55.316 sys 0m8.843s 00:25:55.316 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:55.316 19:08:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:55.316 ************************************ 00:25:55.316 END TEST fio_dif_rand_params 00:25:55.316 ************************************ 00:25:55.316 19:08:32 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:25:55.316 19:08:32 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:55.316 19:08:32 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:55.316 19:08:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:55.316 ************************************ 00:25:55.316 START TEST fio_dif_digest 00:25:55.316 ************************************ 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:25:55.316 19:08:32 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:55.316 bdev_null0 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:55.316 [2024-07-24 19:08:32.851321] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:55.316 { 00:25:55.316 "params": { 00:25:55.316 "name": "Nvme$subsystem", 00:25:55.316 "trtype": "$TEST_TRANSPORT", 00:25:55.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:55.316 "adrfam": "ipv4", 00:25:55.316 "trsvcid": "$NVMF_PORT", 00:25:55.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:55.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:55.316 "hdgst": ${hdgst:-false}, 00:25:55.316 "ddgst": 
${ddgst:-false} 00:25:55.316 }, 00:25:55.316 "method": "bdev_nvme_attach_controller" 00:25:55.316 } 00:25:55.316 EOF 00:25:55.316 )") 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:25:55.316 19:08:32 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:25:55.317 19:08:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:55.317 19:08:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:55.317 19:08:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:55.317 19:08:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:25:55.317 19:08:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:55.317 19:08:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:55.317 19:08:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:25:55.317 19:08:32 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:25:55.317 19:08:32 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:25:55.317 19:08:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:55.317 19:08:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:25:55.317 19:08:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:55.317 19:08:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:25:55.317 19:08:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:25:55.317 19:08:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:55.317 "params": { 00:25:55.317 "name": "Nvme0", 00:25:55.317 "trtype": "tcp", 00:25:55.317 "traddr": "10.0.0.2", 00:25:55.317 "adrfam": "ipv4", 00:25:55.317 "trsvcid": "4420", 00:25:55.317 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:55.317 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:55.317 "hdgst": true, 00:25:55.317 "ddgst": true 00:25:55.317 }, 00:25:55.317 "method": "bdev_nvme_attach_controller" 00:25:55.317 }' 00:25:55.317 19:08:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:55.317 19:08:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:55.317 19:08:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:55.317 19:08:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:55.317 19:08:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:55.317 19:08:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:55.317 19:08:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:55.317 19:08:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:55.317 19:08:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:55.317 19:08:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:55.575 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:55.575 ... 
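Just above, the harness launches a stock fio binary with SPDK's external bdev engine LD_PRELOADed into it; the bdev topology arrives as JSON on fd 62 (the bdev_nvme_attach_controller config printed above, whose hdgst/ddgst flags make the initiator negotiate NVMe/TCP header and data digests at connect time) and the job file on fd 61. A standalone equivalent using ordinary files, with the paths as placeholders for wherever the SPDK tree and configs live:

  # bdev.json holds the {"method": "bdev_nvme_attach_controller", ...} config above
  LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev /usr/src/fio/fio \
      --ioengine=spdk_bdev \
      --spdk_json_conf=bdev.json \
      job.fio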
00:25:55.575 fio-3.35 00:25:55.575 Starting 3 threads 00:25:55.575 EAL: No free 2048 kB hugepages reported on node 1 00:26:07.775 00:26:07.775 filename0: (groupid=0, jobs=1): err= 0: pid=3268113: Wed Jul 24 19:08:43 2024 00:26:07.775 read: IOPS=208, BW=26.0MiB/s (27.3MB/s)(261MiB/10048msec) 00:26:07.775 slat (nsec): min=5068, max=48226, avg=15328.65, stdev=5015.36 00:26:07.775 clat (usec): min=9950, max=50512, avg=14383.92, stdev=1576.76 00:26:07.775 lat (usec): min=9963, max=50525, avg=14399.25, stdev=1576.74 00:26:07.775 clat percentiles (usec): 00:26:07.775 | 1.00th=[11863], 5.00th=[12518], 10.00th=[12911], 20.00th=[13435], 00:26:07.775 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14353], 60.00th=[14615], 00:26:07.775 | 70.00th=[14877], 80.00th=[15270], 90.00th=[15795], 95.00th=[16188], 00:26:07.775 | 99.00th=[17171], 99.50th=[17433], 99.90th=[20579], 99.95th=[49021], 00:26:07.775 | 99.99th=[50594] 00:26:07.775 bw ( KiB/s): min=25600, max=27904, per=35.37%, avg=26713.60, stdev=533.61, samples=20 00:26:07.775 iops : min= 200, max= 218, avg=208.70, stdev= 4.17, samples=20 00:26:07.775 lat (msec) : 10=0.05%, 20=99.71%, 50=0.19%, 100=0.05% 00:26:07.775 cpu : usr=90.86%, sys=8.66%, ctx=36, majf=0, minf=187 00:26:07.775 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:07.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.775 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.775 issued rwts: total=2090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.775 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:07.775 filename0: (groupid=0, jobs=1): err= 0: pid=3268114: Wed Jul 24 19:08:43 2024 00:26:07.775 read: IOPS=197, BW=24.6MiB/s (25.8MB/s)(247MiB/10045msec) 00:26:07.775 slat (nsec): min=4639, max=53129, avg=14843.25, stdev=4340.66 00:26:07.775 clat (usec): min=10162, max=53188, avg=15186.41, stdev=1644.43 00:26:07.775 lat (usec): min=10174, max=53202, avg=15201.25, stdev=1644.32 00:26:07.775 clat percentiles (usec): 00:26:07.775 | 1.00th=[12387], 5.00th=[13304], 10.00th=[13698], 20.00th=[14222], 00:26:07.775 | 30.00th=[14484], 40.00th=[14877], 50.00th=[15139], 60.00th=[15401], 00:26:07.775 | 70.00th=[15795], 80.00th=[16057], 90.00th=[16581], 95.00th=[17171], 00:26:07.775 | 99.00th=[17957], 99.50th=[18744], 99.90th=[49546], 99.95th=[53216], 00:26:07.775 | 99.99th=[53216] 00:26:07.775 bw ( KiB/s): min=24064, max=26368, per=33.51%, avg=25305.60, stdev=617.51, samples=20 00:26:07.775 iops : min= 188, max= 206, avg=197.70, stdev= 4.82, samples=20 00:26:07.775 lat (msec) : 20=99.85%, 50=0.10%, 100=0.05% 00:26:07.775 cpu : usr=90.70%, sys=8.82%, ctx=22, majf=0, minf=213 00:26:07.775 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:07.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.775 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.775 issued rwts: total=1979,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.775 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:07.775 filename0: (groupid=0, jobs=1): err= 0: pid=3268115: Wed Jul 24 19:08:43 2024 00:26:07.775 read: IOPS=185, BW=23.1MiB/s (24.3MB/s)(232MiB/10045msec) 00:26:07.775 slat (nsec): min=4519, max=64892, avg=15296.03, stdev=4740.80 00:26:07.775 clat (usec): min=12419, max=58159, avg=16167.02, stdev=2355.00 00:26:07.775 lat (usec): min=12432, max=58173, avg=16182.31, stdev=2354.93 00:26:07.775 clat percentiles (usec): 00:26:07.775 | 
1.00th=[13435], 5.00th=[14222], 10.00th=[14615], 20.00th=[15139], 00:26:07.775 | 30.00th=[15401], 40.00th=[15795], 50.00th=[16057], 60.00th=[16319], 00:26:07.776 | 70.00th=[16712], 80.00th=[17171], 90.00th=[17695], 95.00th=[17957], 00:26:07.776 | 99.00th=[19006], 99.50th=[19530], 99.90th=[56886], 99.95th=[57934], 00:26:07.776 | 99.99th=[57934] 00:26:07.776 bw ( KiB/s): min=22528, max=24576, per=31.48%, avg=23771.90, stdev=523.84, samples=20 00:26:07.776 iops : min= 176, max= 192, avg=185.70, stdev= 4.12, samples=20 00:26:07.776 lat (msec) : 20=99.57%, 50=0.16%, 100=0.27% 00:26:07.776 cpu : usr=91.32%, sys=8.19%, ctx=25, majf=0, minf=140 00:26:07.776 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:07.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.776 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.776 issued rwts: total=1859,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.776 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:07.776 00:26:07.776 Run status group 0 (all jobs): 00:26:07.776 READ: bw=73.7MiB/s (77.3MB/s), 23.1MiB/s-26.0MiB/s (24.3MB/s-27.3MB/s), io=741MiB (777MB), run=10045-10048msec 00:26:07.776 19:08:44 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:07.776 19:08:44 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:26:07.776 19:08:44 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:26:07.776 19:08:44 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:07.776 19:08:44 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:26:07.776 19:08:44 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:07.776 19:08:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.776 19:08:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:07.776 19:08:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.776 19:08:44 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:07.776 19:08:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.776 19:08:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:07.776 19:08:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.776 00:26:07.776 real 0m11.229s 00:26:07.776 user 0m28.596s 00:26:07.776 sys 0m2.830s 00:26:07.776 19:08:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:07.776 19:08:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:07.776 ************************************ 00:26:07.776 END TEST fio_dif_digest 00:26:07.776 ************************************ 00:26:07.776 19:08:44 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:07.776 19:08:44 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:26:07.776 19:08:44 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:07.776 19:08:44 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:26:07.776 19:08:44 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:07.776 19:08:44 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:26:07.776 19:08:44 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:07.776 19:08:44 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:07.776 rmmod nvme_tcp 00:26:07.776 rmmod nvme_fabrics 00:26:07.776 
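The same cross-check as before, applied to the digest run summarized above, where every read is a single 128 KiB block across three jobs:

  echo $((208 * 128))   # 26624 KiB/s = 26.0 MiB/s, matching pid 3268113
  # aggregate: 26.0 + 24.6 + 23.1 = 73.7 MiB/s, the READ bandwidth above
  # (and 73.7 MiB/s x 1048576 B/MiB ~= 77.3 MB/s in parentheses)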
rmmod nvme_keyring 00:26:07.776 19:08:44 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:07.776 19:08:44 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:26:07.776 19:08:44 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:26:07.776 19:08:44 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3261932 ']' 00:26:07.776 19:08:44 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3261932 00:26:07.776 19:08:44 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 3261932 ']' 00:26:07.776 19:08:44 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 3261932 00:26:07.776 19:08:44 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:26:07.776 19:08:44 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:07.776 19:08:44 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3261932 00:26:07.776 19:08:44 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:07.776 19:08:44 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:07.776 19:08:44 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3261932' 00:26:07.776 killing process with pid 3261932 00:26:07.776 19:08:44 nvmf_dif -- common/autotest_common.sh@969 -- # kill 3261932 00:26:07.776 19:08:44 nvmf_dif -- common/autotest_common.sh@974 -- # wait 3261932 00:26:07.776 19:08:44 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:26:07.776 19:08:44 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:08.035 Waiting for block devices as requested 00:26:08.035 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:08.035 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:08.298 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:08.298 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:08.298 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:08.298 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:08.584 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:08.584 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:08.584 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:26:08.584 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:08.846 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:08.846 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:08.846 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:09.104 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:09.104 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:09.104 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:09.104 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:09.362 19:08:46 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:09.362 19:08:46 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:09.362 19:08:46 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:09.362 19:08:46 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:09.362 19:08:46 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.362 19:08:46 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:09.362 19:08:46 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.263 19:08:48 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:11.263 00:26:11.263 real 1m6.730s 00:26:11.263 user 6m23.287s 00:26:11.263 sys 0m21.250s 00:26:11.263 19:08:48 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:11.263 19:08:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 
00:26:11.263 ************************************ 00:26:11.263 END TEST nvmf_dif 00:26:11.263 ************************************ 00:26:11.263 19:08:48 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:11.263 19:08:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:11.263 19:08:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:11.263 19:08:48 -- common/autotest_common.sh@10 -- # set +x 00:26:11.521 ************************************ 00:26:11.521 START TEST nvmf_abort_qd_sizes 00:26:11.521 ************************************ 00:26:11.521 19:08:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:11.521 * Looking for test storage... 00:26:11.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:11.521 19:08:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:11.521 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:26:11.521 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.522 19:08:48 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:26:11.522 19:08:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:13.424 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:13.424 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:13.424 Found net devices under 0000:09:00.0: cvl_0_0 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.424 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:13.425 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.425 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:13.425 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:13.425 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:13.425 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:13.425 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.425 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:13.425 Found net devices under 0000:09:00.1: cvl_0_1 00:26:13.425 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.425 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
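The enumeration above boils down to matching supported vendor/device IDs and then reading each PCI function's netdev out of sysfs. A rough stand-alone sketch for the e810/tcp case in this run (the PCI addresses and the 0x8086:0x159b E810 ID are taken from the log; pci_bus_cache is internal to nvmf/common.sh and is not reproduced here):

    # sketch of what gather_supported_nvmf_pci_devs does for two Intel E810 ports
    for pci in 0000:09:00.0 0000:09:00.1; do
        for net in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$net" ] || continue                         # skip ports with no bound netdev
            echo "Found net devices under $pci: ${net##*/}"   # cvl_0_0 / cvl_0_1 in this run
        done
    done
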
00:26:13.425 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:26:13.425 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:13.425 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:13.425 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:13.425 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:13.425 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:13.425 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:13.425 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:13.425 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:13.425 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:13.425 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:13.425 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:13.425 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:13.425 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:13.425 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:13.425 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:13.425 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:13.425 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:13.425 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:13.425 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:13.425 19:08:50 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:13.425 19:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:13.425 19:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:13.683 19:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:13.683 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:13.683 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:26:13.683 00:26:13.683 --- 10.0.0.2 ping statistics --- 00:26:13.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.683 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:26:13.683 19:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:13.683 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:13.683 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:26:13.683 00:26:13.683 --- 10.0.0.1 ping statistics --- 00:26:13.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.683 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:26:13.683 19:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:13.683 19:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:26:13.683 19:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:26:13.683 19:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:14.617 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:14.617 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:14.875 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:14.875 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:14.875 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:14.875 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:14.875 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:14.875 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:14.875 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:14.875 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:14.875 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:14.875 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:14.875 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:14.875 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:14.875 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:14.875 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:15.810 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:26:16.067 19:08:53 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:16.067 19:08:53 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:16.067 19:08:53 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:16.068 19:08:53 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:16.068 19:08:53 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:16.068 19:08:53 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:16.068 19:08:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:26:16.068 19:08:53 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:16.068 19:08:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:16.068 19:08:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:16.068 19:08:53 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3272908 00:26:16.068 19:08:53 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:16.068 19:08:53 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3272908 00:26:16.068 19:08:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 3272908 ']' 00:26:16.068 19:08:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:16.068 19:08:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:16.068 19:08:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:16.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:16.068 19:08:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:16.068 19:08:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:16.068 [2024-07-24 19:08:53.531189] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:26:16.068 [2024-07-24 19:08:53.531263] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:16.068 EAL: No free 2048 kB hugepages reported on node 1 00:26:16.068 [2024-07-24 19:08:53.597314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:16.325 [2024-07-24 19:08:53.714145] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:16.326 [2024-07-24 19:08:53.714201] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:16.326 [2024-07-24 19:08:53.714228] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:16.326 [2024-07-24 19:08:53.714242] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:16.326 [2024-07-24 19:08:53.714254] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:16.326 [2024-07-24 19:08:53.714332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:16.326 [2024-07-24 19:08:53.714385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:16.326 [2024-07-24 19:08:53.714500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:16.326 [2024-07-24 19:08:53.714503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.890 19:08:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:16.890 19:08:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:26:16.890 19:08:54 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:16.890 19:08:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:16.890 19:08:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:16.890 19:08:54 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:16.890 19:08:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:16.890 19:08:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:26:16.890 19:08:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:26:16.890 19:08:54 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:26:16.890 19:08:54 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:26:16.890 19:08:54 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:0b:00.0 ]] 00:26:16.890 19:08:54 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:26:16.890 19:08:54 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:26:16.890 19:08:54 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:0b:00.0 ]] 00:26:16.890 19:08:54 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:26:16.890 19:08:54 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:26:16.890 19:08:54 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:26:16.890 19:08:54 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:26:16.890 19:08:54 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:0b:00.0 00:26:16.890 19:08:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:26:16.890 19:08:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:0b:00.0 00:26:16.890 19:08:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:26:16.890 19:08:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:16.890 19:08:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:16.890 19:08:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:17.148 ************************************ 00:26:17.148 START TEST spdk_target_abort 00:26:17.148 ************************************ 00:26:17.148 19:08:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:26:17.148 19:08:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:17.148 19:08:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target 00:26:17.148 19:08:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.148 19:08:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:20.426 spdk_targetn1 00:26:20.426 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.426 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:20.426 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.426 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:20.426 [2024-07-24 19:08:57.352998] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:20.426 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.426 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:26:20.426 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.426 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:20.426 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.426 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:26:20.426 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.426 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:20.426 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.426 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:26:20.426 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.426 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:20.426 [2024-07-24 19:08:57.385270] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:20.426 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.426 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:26:20.426 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:20.426 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:20.426 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:20.426 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:20.426 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:20.426 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:20.426 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:20.426 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:20.427 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:20.427 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:20.427 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:20.427 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:20.427 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:20.427 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:20.427 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:20.427 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:20.427 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:20.427 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:20.427 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:20.427 19:08:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:20.427 EAL: No free 2048 kB hugepages 
reported on node 1 00:26:22.956 Initializing NVMe Controllers 00:26:22.956 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:22.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:22.956 Initialization complete. Launching workers. 00:26:22.956 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10769, failed: 0 00:26:22.956 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1275, failed to submit 9494 00:26:22.956 success 793, unsuccess 482, failed 0 00:26:22.956 19:09:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:22.956 19:09:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:22.956 EAL: No free 2048 kB hugepages reported on node 1 00:26:26.234 Initializing NVMe Controllers 00:26:26.234 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:26.234 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:26.234 Initialization complete. Launching workers. 00:26:26.234 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8971, failed: 0 00:26:26.234 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1232, failed to submit 7739 00:26:26.234 success 281, unsuccess 951, failed 0 00:26:26.234 19:09:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:26.234 19:09:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:26.492 EAL: No free 2048 kB hugepages reported on node 1 00:26:29.770 Initializing NVMe Controllers 00:26:29.770 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:29.770 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:29.770 Initialization complete. Launching workers. 
00:26:29.770 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31026, failed: 0 00:26:29.770 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2671, failed to submit 28355 00:26:29.770 success 554, unsuccess 2117, failed 0 00:26:29.770 19:09:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:26:29.770 19:09:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.770 19:09:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:29.770 19:09:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.770 19:09:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:29.770 19:09:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.770 19:09:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:31.143 19:09:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.143 19:09:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3272908 00:26:31.143 19:09:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 3272908 ']' 00:26:31.143 19:09:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 3272908 00:26:31.143 19:09:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:26:31.143 19:09:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:31.143 19:09:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3272908 00:26:31.143 19:09:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:31.143 19:09:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:31.143 19:09:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3272908' 00:26:31.143 killing process with pid 3272908 00:26:31.143 19:09:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 3272908 00:26:31.143 19:09:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 3272908 00:26:31.143 00:26:31.143 real 0m14.211s 00:26:31.143 user 0m55.985s 00:26:31.143 sys 0m2.668s 00:26:31.143 19:09:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:31.143 19:09:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:31.143 ************************************ 00:26:31.143 END TEST spdk_target_abort 00:26:31.143 ************************************ 00:26:31.143 19:09:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:26:31.143 19:09:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:31.143 19:09:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:31.143 19:09:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:31.401 ************************************ 00:26:31.401 START TEST kernel_target_abort 00:26:31.401 
************************************ 00:26:31.402 19:09:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:26:31.402 19:09:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:26:31.402 19:09:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:26:31.402 19:09:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.402 19:09:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.402 19:09:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.402 19:09:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.402 19:09:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:31.402 19:09:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.402 19:09:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:31.402 19:09:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:31.402 19:09:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:31.402 19:09:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:31.402 19:09:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:31.402 19:09:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:31.402 19:09:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:31.402 19:09:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:31.402 19:09:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:31.402 19:09:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:26:31.402 19:09:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:31.402 19:09:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:31.402 19:09:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:31.402 19:09:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:32.338 Waiting for block devices as requested 00:26:32.338 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:32.338 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:32.597 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:32.597 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:32.597 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:32.597 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:32.597 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:32.855 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:32.855 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:26:33.113 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:33.114 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:33.114 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:33.114 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:33.377 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:33.377 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:33.377 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:33.377 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:33.647 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:33.647 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:33.647 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:33.647 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:26:33.647 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:33.647 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:33.647 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:33.647 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:33.647 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:33.647 No valid GPT data, bailing 00:26:33.647 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:33.647 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:26:33.647 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:26:33.647 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:33.647 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:33.647 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:33.647 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:33.647 19:09:11 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:33.647 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:33.647 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:26:33.647 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:33.647 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:26:33.647 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:33.647 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:26:33.647 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:26:33.647 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:26:33.648 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:33.648 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:26:33.648 00:26:33.648 Discovery Log Number of Records 2, Generation counter 2 00:26:33.648 =====Discovery Log Entry 0====== 00:26:33.648 trtype: tcp 00:26:33.648 adrfam: ipv4 00:26:33.648 subtype: current discovery subsystem 00:26:33.648 treq: not specified, sq flow control disable supported 00:26:33.648 portid: 1 00:26:33.648 trsvcid: 4420 00:26:33.648 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:33.648 traddr: 10.0.0.1 00:26:33.648 eflags: none 00:26:33.648 sectype: none 00:26:33.648 =====Discovery Log Entry 1====== 00:26:33.648 trtype: tcp 00:26:33.648 adrfam: ipv4 00:26:33.648 subtype: nvme subsystem 00:26:33.648 treq: not specified, sq flow control disable supported 00:26:33.648 portid: 1 00:26:33.648 trsvcid: 4420 00:26:33.648 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:33.648 traddr: 10.0.0.1 00:26:33.648 eflags: none 00:26:33.648 sectype: none 00:26:33.648 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:26:33.648 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:33.648 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:33.648 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:26:33.648 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:33.648 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:33.648 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:33.648 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:33.648 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:33.648 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:33.648 19:09:11 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:33.648 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:33.648 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:33.648 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:33.648 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:26:33.648 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:33.648 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:26:33.648 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:33.648 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:33.648 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:33.648 19:09:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:33.648 EAL: No free 2048 kB hugepages reported on node 1 00:26:36.963 Initializing NVMe Controllers 00:26:36.963 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:36.964 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:36.964 Initialization complete. Launching workers. 00:26:36.964 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32065, failed: 0 00:26:36.964 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32065, failed to submit 0 00:26:36.964 success 0, unsuccess 32065, failed 0 00:26:36.964 19:09:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:36.964 19:09:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:36.964 EAL: No free 2048 kB hugepages reported on node 1 00:26:40.243 Initializing NVMe Controllers 00:26:40.243 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:40.243 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:40.243 Initialization complete. Launching workers. 
00:26:40.243 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 63931, failed: 0 00:26:40.243 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16114, failed to submit 47817 00:26:40.243 success 0, unsuccess 16114, failed 0 00:26:40.243 19:09:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:40.243 19:09:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:40.243 EAL: No free 2048 kB hugepages reported on node 1 00:26:43.521 Initializing NVMe Controllers 00:26:43.521 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:43.521 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:43.521 Initialization complete. Launching workers. 00:26:43.521 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 62540, failed: 0 00:26:43.521 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15630, failed to submit 46910 00:26:43.521 success 0, unsuccess 15630, failed 0 00:26:43.521 19:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:26:43.522 19:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:43.522 19:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:26:43.522 19:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:43.522 19:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:43.522 19:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:43.522 19:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:43.522 19:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:43.522 19:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:43.522 19:09:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:44.466 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:44.466 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:44.466 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:44.466 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:44.466 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:44.466 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:44.466 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:44.466 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:44.466 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:44.466 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:44.466 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:44.466 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:44.466 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:44.466 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:26:44.466 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:44.466 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:45.401 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:26:45.660 00:26:45.660 real 0m14.275s 00:26:45.660 user 0m5.177s 00:26:45.660 sys 0m3.236s 00:26:45.660 19:09:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:45.660 19:09:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:45.660 ************************************ 00:26:45.660 END TEST kernel_target_abort 00:26:45.660 ************************************ 00:26:45.660 19:09:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:45.660 19:09:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:26:45.660 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:45.660 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:26:45.660 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:45.660 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:26:45.660 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:45.660 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:45.660 rmmod nvme_tcp 00:26:45.660 rmmod nvme_fabrics 00:26:45.660 rmmod nvme_keyring 00:26:45.660 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:45.660 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:26:45.660 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:26:45.660 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3272908 ']' 00:26:45.660 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3272908 00:26:45.660 19:09:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 3272908 ']' 00:26:45.660 19:09:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 3272908 00:26:45.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3272908) - No such process 00:26:45.660 19:09:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 3272908 is not found' 00:26:45.660 Process with pid 3272908 is not found 00:26:45.660 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:26:45.660 19:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:46.596 Waiting for block devices as requested 00:26:46.855 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:46.855 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:46.855 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:47.114 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:47.114 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:47.114 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:47.114 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:47.373 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:47.373 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:26:47.373 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:47.633 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:47.633 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:47.633 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:47.633 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:47.891 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:47.891 0000:80:04.1 
(8086 0e21): vfio-pci -> ioatdma 00:26:47.891 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:48.150 19:09:25 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:48.150 19:09:25 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:48.150 19:09:25 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:48.150 19:09:25 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:48.150 19:09:25 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.150 19:09:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:48.150 19:09:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:50.051 19:09:27 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:50.051 00:26:50.051 real 0m38.664s 00:26:50.051 user 1m3.500s 00:26:50.051 sys 0m9.308s 00:26:50.051 19:09:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:50.051 19:09:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:50.051 ************************************ 00:26:50.051 END TEST nvmf_abort_qd_sizes 00:26:50.051 ************************************ 00:26:50.051 19:09:27 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:26:50.051 19:09:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:50.051 19:09:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:50.051 19:09:27 -- common/autotest_common.sh@10 -- # set +x 00:26:50.051 ************************************ 00:26:50.051 START TEST keyring_file 00:26:50.051 ************************************ 00:26:50.051 19:09:27 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:26:50.051 * Looking for test storage... 
00:26:50.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:26:50.051 19:09:27 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:26:50.051 19:09:27 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:50.051 19:09:27 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:26:50.310 19:09:27 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:50.310 19:09:27 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:50.310 19:09:27 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:50.310 19:09:27 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:50.310 19:09:27 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:50.310 19:09:27 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:50.310 19:09:27 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:50.310 19:09:27 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:50.310 19:09:27 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:50.310 19:09:27 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:50.310 19:09:27 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:50.310 19:09:27 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:50.310 19:09:27 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:50.310 19:09:27 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:50.310 19:09:27 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:50.310 19:09:27 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:50.310 19:09:27 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:50.310 19:09:27 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:50.310 19:09:27 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:50.310 19:09:27 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:50.310 19:09:27 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.310 19:09:27 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.310 19:09:27 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.310 19:09:27 keyring_file -- paths/export.sh@5 -- # export PATH 00:26:50.310 19:09:27 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.310 19:09:27 keyring_file -- nvmf/common.sh@47 -- # : 0 00:26:50.310 19:09:27 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:50.310 19:09:27 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:50.310 19:09:27 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:50.310 19:09:27 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:50.310 19:09:27 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:50.310 19:09:27 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:50.310 19:09:27 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:50.310 19:09:27 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:50.310 19:09:27 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:26:50.310 19:09:27 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:26:50.311 19:09:27 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:26:50.311 19:09:27 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:26:50.311 19:09:27 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:26:50.311 19:09:27 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:26:50.311 19:09:27 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:26:50.311 19:09:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:26:50.311 19:09:27 keyring_file -- keyring/common.sh@17 -- # name=key0 00:26:50.311 19:09:27 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:26:50.311 19:09:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:26:50.311 19:09:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:26:50.311 19:09:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.fVQfHAuOpz 00:26:50.311 19:09:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:26:50.311 19:09:27 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:26:50.311 19:09:27 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:26:50.311 19:09:27 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:26:50.311 19:09:27 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:26:50.311 19:09:27 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:26:50.311 19:09:27 keyring_file -- nvmf/common.sh@705 -- # python - 00:26:50.311 19:09:27 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.fVQfHAuOpz 00:26:50.311 19:09:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.fVQfHAuOpz 00:26:50.311 19:09:27 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.fVQfHAuOpz 00:26:50.311 19:09:27 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:26:50.311 19:09:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:26:50.311 19:09:27 keyring_file -- keyring/common.sh@17 -- # name=key1 00:26:50.311 19:09:27 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:26:50.311 19:09:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:26:50.311 19:09:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:26:50.311 19:09:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.BRlQ19mbvw 00:26:50.311 19:09:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:26:50.311 19:09:27 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:26:50.311 19:09:27 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:26:50.311 19:09:27 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:26:50.311 19:09:27 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:26:50.311 19:09:27 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:26:50.311 19:09:27 keyring_file -- nvmf/common.sh@705 -- # python - 00:26:50.311 19:09:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.BRlQ19mbvw 00:26:50.311 19:09:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.BRlQ19mbvw 00:26:50.311 19:09:27 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.BRlQ19mbvw 00:26:50.311 19:09:27 keyring_file -- keyring/file.sh@30 -- # tgtpid=3279417 00:26:50.311 19:09:27 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:26:50.311 19:09:27 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3279417 00:26:50.311 19:09:27 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3279417 ']' 00:26:50.311 19:09:27 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.311 19:09:27 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:50.311 19:09:27 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.311 19:09:27 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:50.311 19:09:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:50.311 [2024-07-24 19:09:27.816175] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
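[Annotation] The `prep_key`/`format_interchange_psk` trace above pipes a raw hex key through `python -` to produce the PSK files (`/tmp/tmp.fVQfHAuOpz`, `/tmp/tmp.BRlQ19mbvw`) that the test later registers. As a hedged sketch only — not SPDK's actual helper — the output appears to follow the NVMe/TCP TLS PSK interchange layout: a `NVMeTLSkey-1` prefix, a two-digit hash indicator (`00` here, meaning no hash), and base64 of the raw key with its CRC-32 appended. The function name and the little-endian CRC byte order below are assumptions:

```python
# Sketch only: assumed layout "NVMeTLSkey-1:<hh>:<base64(key || CRC-32(key))>:"
# mirroring the prep_key / format_interchange_psk trace above (digest 0 = no hash).
import base64
import struct
import zlib

def format_interchange_psk(key_hex: str, digest: int = 0) -> str:
    key = bytes.fromhex(key_hex)
    crc = struct.pack("<I", zlib.crc32(key))  # CRC-32 of the key, little-endian (assumed)
    return "NVMeTLSkey-1:{:02x}:{}:".format(
        digest, base64.b64encode(key + crc).decode("ascii"))

# key0 from the trace above:
print(format_interchange_psk("00112233445566778899aabbccddeeff", 0))
```

The test then `chmod 0600`s the resulting file, which matches the permission check exercised later in the suite (keyring_file_add_key rejects a key file that is group/world accessible).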
00:26:50.311 [2024-07-24 19:09:27.816269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279417 ] 00:26:50.311 EAL: No free 2048 kB hugepages reported on node 1 00:26:50.311 [2024-07-24 19:09:27.872766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.569 [2024-07-24 19:09:27.981210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.828 19:09:28 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:50.828 19:09:28 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:26:50.828 19:09:28 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:26:50.828 19:09:28 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.828 19:09:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:50.828 [2024-07-24 19:09:28.254165] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:50.828 null0 00:26:50.828 [2024-07-24 19:09:28.286221] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:50.828 [2024-07-24 19:09:28.286709] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:50.828 [2024-07-24 19:09:28.294211] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:50.828 19:09:28 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.828 19:09:28 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:50.828 19:09:28 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:26:50.828 19:09:28 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:50.828 19:09:28 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:50.828 19:09:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:50.828 19:09:28 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:50.828 19:09:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:50.828 19:09:28 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:50.828 19:09:28 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.828 19:09:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:50.828 [2024-07-24 19:09:28.306238] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:26:50.828 request: 00:26:50.828 { 00:26:50.828 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:26:50.828 "secure_channel": false, 00:26:50.828 "listen_address": { 00:26:50.828 "trtype": "tcp", 00:26:50.828 "traddr": "127.0.0.1", 00:26:50.828 "trsvcid": "4420" 00:26:50.828 }, 00:26:50.828 "method": "nvmf_subsystem_add_listener", 00:26:50.828 "req_id": 1 00:26:50.828 } 00:26:50.828 Got JSON-RPC error response 00:26:50.828 response: 00:26:50.828 { 00:26:50.828 "code": -32602, 00:26:50.828 "message": "Invalid parameters" 00:26:50.828 } 00:26:50.828 19:09:28 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:50.828 19:09:28 keyring_file -- common/autotest_common.sh@653 -- # es=1 
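[Annotation] The negative test above (`NOT rpc_cmd nvmf_subsystem_add_listener ...`) drives the target over `/var/tmp/spdk.sock` and expects the "Listener already exists" / "Invalid parameters" error dumped in the trace. For illustration of that transport only — the real test uses `scripts/rpc.py`, and the framing here is an assumption — a minimal JSON-RPC-over-Unix-socket call could look like:

```python
# Minimal JSON-RPC client sketch for an SPDK-style Unix socket; illustrative
# only -- the suite itself shells out to scripts/rpc.py.
import json
import socket

def rpc_call(sock_path: str, method: str, params: dict):
    req = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            buf += chunk
            try:
                return json.loads(buf)  # return once a complete JSON response arrives
            except json.JSONDecodeError:
                continue  # response still partial; keep reading

# Parameters copied from the request JSON in the trace above:
resp = rpc_call("/var/tmp/spdk.sock", "nvmf_subsystem_add_listener", {
    "nqn": "nqn.2016-06.io.spdk:cnode0",
    "secure_channel": False,
    "listen_address": {"trtype": "tcp", "traddr": "127.0.0.1", "trsvcid": "4420"},
})
```

Re-adding a listener that already exists yields the `-32602` error response shown above, which is exactly what the `NOT` wrapper asserts.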
00:26:50.828 19:09:28 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:50.828 19:09:28 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:50.828 19:09:28 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:50.828 19:09:28 keyring_file -- keyring/file.sh@46 -- # bperfpid=3279437 00:26:50.828 19:09:28 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:26:50.828 19:09:28 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3279437 /var/tmp/bperf.sock 00:26:50.828 19:09:28 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3279437 ']' 00:26:50.828 19:09:28 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:50.828 19:09:28 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:50.828 19:09:28 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:50.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:50.828 19:09:28 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:50.828 19:09:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:50.828 [2024-07-24 19:09:28.352219] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 00:26:50.828 [2024-07-24 19:09:28.352293] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279437 ] 00:26:50.828 EAL: No free 2048 kB hugepages reported on node 1 00:26:50.828 [2024-07-24 19:09:28.414697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.087 [2024-07-24 19:09:28.533250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:52.020 19:09:29 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:52.020 19:09:29 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:26:52.020 19:09:29 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fVQfHAuOpz 00:26:52.020 19:09:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fVQfHAuOpz 00:26:52.020 19:09:29 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.BRlQ19mbvw 00:26:52.020 19:09:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.BRlQ19mbvw 00:26:52.277 19:09:29 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:26:52.278 19:09:29 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:26:52.278 19:09:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:52.278 19:09:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:52.278 19:09:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:52.536 19:09:30 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.fVQfHAuOpz == \/\t\m\p\/\t\m\p\.\f\V\Q\f\H\A\u\O\p\z ]] 00:26:52.536 19:09:30 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:26:52.536 19:09:30 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:26:52.536 19:09:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:52.536 19:09:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:52.536 19:09:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:52.794 19:09:30 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.BRlQ19mbvw == \/\t\m\p\/\t\m\p\.\B\R\l\Q\1\9\m\b\v\w ]] 00:26:52.794 19:09:30 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:26:52.794 19:09:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:52.794 19:09:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:52.794 19:09:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:52.794 19:09:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:52.794 19:09:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:53.052 19:09:30 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:26:53.052 19:09:30 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:26:53.052 19:09:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:53.052 19:09:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:53.052 19:09:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:53.052 19:09:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:53.052 19:09:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:53.309 19:09:30 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:26:53.309 19:09:30 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:53.309 19:09:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:53.566 [2024-07-24 19:09:31.047772] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:53.566 nvme0n1 00:26:53.566 19:09:31 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:26:53.566 19:09:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:53.566 19:09:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:53.566 19:09:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:53.566 19:09:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:53.566 19:09:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:53.823 19:09:31 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:26:53.823 19:09:31 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:26:53.823 19:09:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:53.823 19:09:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:53.823 19:09:31 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:53.823 19:09:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:53.823 19:09:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:54.081 19:09:31 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:26:54.081 19:09:31 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:54.338 Running I/O for 1 seconds... 00:26:55.270 00:26:55.270 Latency(us) 00:26:55.270 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:55.270 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:26:55.270 nvme0n1 : 1.02 4825.72 18.85 0.00 0.00 26240.33 4174.89 30486.38 00:26:55.270 =================================================================================================================== 00:26:55.270 Total : 4825.72 18.85 0.00 0.00 26240.33 4174.89 30486.38 00:26:55.270 0 00:26:55.270 19:09:32 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:26:55.271 19:09:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:26:55.528 19:09:33 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:26:55.528 19:09:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:55.528 19:09:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:55.528 19:09:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:55.528 19:09:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:55.528 19:09:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:55.786 19:09:33 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:26:55.786 19:09:33 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:26:55.786 19:09:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:55.786 19:09:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:55.786 19:09:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:55.786 19:09:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:55.786 19:09:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:56.042 19:09:33 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:26:56.042 19:09:33 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:56.042 19:09:33 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:26:56.042 19:09:33 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:56.042 19:09:33 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:26:56.042 19:09:33 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:56.042 19:09:33 keyring_file -- 
common/autotest_common.sh@642 -- # type -t bperf_cmd 00:26:56.042 19:09:33 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:56.042 19:09:33 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:56.042 19:09:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:56.299 [2024-07-24 19:09:33.764737] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:26:56.299 [2024-07-24 19:09:33.765251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11899a0 (107): Transport endpoint is not connected 00:26:56.299 [2024-07-24 19:09:33.766243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11899a0 (9): Bad file descriptor 00:26:56.299 [2024-07-24 19:09:33.767241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:56.299 [2024-07-24 19:09:33.767261] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:26:56.299 [2024-07-24 19:09:33.767275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:56.299 request: 00:26:56.299 { 00:26:56.299 "name": "nvme0", 00:26:56.299 "trtype": "tcp", 00:26:56.299 "traddr": "127.0.0.1", 00:26:56.299 "adrfam": "ipv4", 00:26:56.299 "trsvcid": "4420", 00:26:56.299 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:56.299 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:56.299 "prchk_reftag": false, 00:26:56.299 "prchk_guard": false, 00:26:56.299 "hdgst": false, 00:26:56.299 "ddgst": false, 00:26:56.299 "psk": "key1", 00:26:56.299 "method": "bdev_nvme_attach_controller", 00:26:56.299 "req_id": 1 00:26:56.299 } 00:26:56.299 Got JSON-RPC error response 00:26:56.299 response: 00:26:56.299 { 00:26:56.299 "code": -5, 00:26:56.299 "message": "Input/output error" 00:26:56.299 } 00:26:56.299 19:09:33 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:26:56.299 19:09:33 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:56.299 19:09:33 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:56.299 19:09:33 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:56.299 19:09:33 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:26:56.299 19:09:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:56.299 19:09:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:56.299 19:09:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:56.299 19:09:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:56.299 19:09:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:56.557 19:09:34 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:26:56.557 19:09:34 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:26:56.557 19:09:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:56.557 19:09:34 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:56.557 19:09:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:56.557 19:09:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:56.557 19:09:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:56.846 19:09:34 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:26:56.846 19:09:34 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:26:56.846 19:09:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:26:57.103 19:09:34 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:26:57.103 19:09:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:26:57.361 19:09:34 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:26:57.361 19:09:34 keyring_file -- keyring/file.sh@77 -- # jq length 00:26:57.361 19:09:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:57.619 19:09:35 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:26:57.619 19:09:35 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.fVQfHAuOpz 00:26:57.620 19:09:35 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.fVQfHAuOpz 00:26:57.620 19:09:35 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:26:57.620 19:09:35 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.fVQfHAuOpz 00:26:57.620 19:09:35 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:26:57.620 19:09:35 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:57.620 19:09:35 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:26:57.620 19:09:35 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:57.620 19:09:35 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fVQfHAuOpz 00:26:57.620 19:09:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fVQfHAuOpz 00:26:57.878 [2024-07-24 19:09:35.255044] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.fVQfHAuOpz': 0100660 00:26:57.878 [2024-07-24 19:09:35.255089] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:26:57.878 request: 00:26:57.878 { 00:26:57.878 "name": "key0", 00:26:57.878 "path": "/tmp/tmp.fVQfHAuOpz", 00:26:57.878 "method": "keyring_file_add_key", 00:26:57.878 "req_id": 1 00:26:57.878 } 00:26:57.878 Got JSON-RPC error response 00:26:57.878 response: 00:26:57.878 { 00:26:57.878 "code": -1, 00:26:57.878 "message": "Operation not permitted" 00:26:57.878 } 00:26:57.878 19:09:35 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:26:57.878 19:09:35 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:57.878 19:09:35 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:57.878 19:09:35 keyring_file -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:57.878 19:09:35 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.fVQfHAuOpz 00:26:57.878 19:09:35 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fVQfHAuOpz 00:26:57.878 19:09:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fVQfHAuOpz 00:26:58.136 19:09:35 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.fVQfHAuOpz 00:26:58.136 19:09:35 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:26:58.136 19:09:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:58.136 19:09:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:58.136 19:09:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:58.136 19:09:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:58.136 19:09:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:58.394 19:09:35 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:26:58.394 19:09:35 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:58.394 19:09:35 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:26:58.394 19:09:35 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:58.394 19:09:35 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:26:58.394 19:09:35 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:58.394 19:09:35 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:26:58.394 19:09:35 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:58.394 19:09:35 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:58.394 19:09:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:58.652 [2024-07-24 19:09:36.009094] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.fVQfHAuOpz': No such file or directory 00:26:58.652 [2024-07-24 19:09:36.009155] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:26:58.652 [2024-07-24 19:09:36.009182] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:26:58.652 [2024-07-24 19:09:36.009193] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:58.652 [2024-07-24 19:09:36.009204] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:26:58.652 request: 00:26:58.652 { 00:26:58.652 "name": "nvme0", 00:26:58.652 "trtype": "tcp", 00:26:58.652 "traddr": "127.0.0.1", 00:26:58.652 "adrfam": "ipv4", 00:26:58.652 
"trsvcid": "4420", 00:26:58.652 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:58.652 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:58.652 "prchk_reftag": false, 00:26:58.652 "prchk_guard": false, 00:26:58.652 "hdgst": false, 00:26:58.652 "ddgst": false, 00:26:58.652 "psk": "key0", 00:26:58.652 "method": "bdev_nvme_attach_controller", 00:26:58.652 "req_id": 1 00:26:58.652 } 00:26:58.652 Got JSON-RPC error response 00:26:58.652 response: 00:26:58.652 { 00:26:58.652 "code": -19, 00:26:58.652 "message": "No such device" 00:26:58.652 } 00:26:58.652 19:09:36 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:26:58.652 19:09:36 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:58.652 19:09:36 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:58.652 19:09:36 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:58.652 19:09:36 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:26:58.652 19:09:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:26:58.910 19:09:36 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:26:58.910 19:09:36 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:26:58.910 19:09:36 keyring_file -- keyring/common.sh@17 -- # name=key0 00:26:58.910 19:09:36 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:26:58.910 19:09:36 keyring_file -- keyring/common.sh@17 -- # digest=0 00:26:58.910 19:09:36 keyring_file -- keyring/common.sh@18 -- # mktemp 00:26:58.910 19:09:36 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.hcIMxVRNUl 00:26:58.910 19:09:36 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:26:58.910 19:09:36 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:26:58.910 19:09:36 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:26:58.910 19:09:36 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:26:58.910 19:09:36 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:26:58.910 19:09:36 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:26:58.910 19:09:36 keyring_file -- nvmf/common.sh@705 -- # python - 00:26:58.910 19:09:36 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.hcIMxVRNUl 00:26:58.910 19:09:36 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.hcIMxVRNUl 00:26:58.910 19:09:36 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.hcIMxVRNUl 00:26:58.910 19:09:36 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.hcIMxVRNUl 00:26:58.910 19:09:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.hcIMxVRNUl 00:26:59.167 19:09:36 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:59.167 19:09:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:59.425 nvme0n1 00:26:59.425 
19:09:36 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:26:59.425 19:09:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:59.425 19:09:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:59.425 19:09:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:59.425 19:09:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:59.425 19:09:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:59.683 19:09:37 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:26:59.683 19:09:37 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:26:59.683 19:09:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:26:59.940 19:09:37 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:26:59.940 19:09:37 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:26:59.940 19:09:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:59.940 19:09:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:59.940 19:09:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:00.198 19:09:37 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:27:00.198 19:09:37 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:27:00.198 19:09:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:00.198 19:09:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:00.198 19:09:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:00.198 19:09:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:00.198 19:09:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:00.456 19:09:37 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:27:00.456 19:09:37 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:00.456 19:09:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:00.713 19:09:38 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:27:00.714 19:09:38 keyring_file -- keyring/file.sh@104 -- # jq length 00:27:00.714 19:09:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:00.972 19:09:38 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:27:00.972 19:09:38 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.hcIMxVRNUl 00:27:00.972 19:09:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.hcIMxVRNUl 00:27:01.230 19:09:38 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.BRlQ19mbvw 00:27:01.230 19:09:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.BRlQ19mbvw 00:27:01.488 19:09:38 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:01.488 19:09:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:01.746 nvme0n1 00:27:01.747 19:09:39 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:27:01.747 19:09:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:27:02.006 19:09:39 keyring_file -- keyring/file.sh@112 -- # config='{ 00:27:02.006 "subsystems": [ 00:27:02.006 { 00:27:02.006 "subsystem": "keyring", 00:27:02.006 "config": [ 00:27:02.006 { 00:27:02.006 "method": "keyring_file_add_key", 00:27:02.006 "params": { 00:27:02.006 "name": "key0", 00:27:02.006 "path": "/tmp/tmp.hcIMxVRNUl" 00:27:02.006 } 00:27:02.006 }, 00:27:02.006 { 00:27:02.006 "method": "keyring_file_add_key", 00:27:02.006 "params": { 00:27:02.006 "name": "key1", 00:27:02.006 "path": "/tmp/tmp.BRlQ19mbvw" 00:27:02.006 } 00:27:02.006 } 00:27:02.006 ] 00:27:02.006 }, 00:27:02.006 { 00:27:02.006 "subsystem": "iobuf", 00:27:02.006 "config": [ 00:27:02.006 { 00:27:02.006 "method": "iobuf_set_options", 00:27:02.006 "params": { 00:27:02.006 "small_pool_count": 8192, 00:27:02.006 "large_pool_count": 1024, 00:27:02.006 "small_bufsize": 8192, 00:27:02.006 "large_bufsize": 135168 00:27:02.006 } 00:27:02.006 } 00:27:02.006 ] 00:27:02.006 }, 00:27:02.006 { 00:27:02.006 "subsystem": "sock", 00:27:02.006 "config": [ 00:27:02.006 { 00:27:02.006 "method": "sock_set_default_impl", 00:27:02.006 "params": { 00:27:02.006 "impl_name": "posix" 00:27:02.006 } 00:27:02.006 }, 00:27:02.006 { 00:27:02.006 "method": "sock_impl_set_options", 00:27:02.006 "params": { 00:27:02.006 "impl_name": "ssl", 00:27:02.006 "recv_buf_size": 4096, 00:27:02.006 "send_buf_size": 4096, 00:27:02.006 "enable_recv_pipe": true, 00:27:02.006 "enable_quickack": false, 00:27:02.006 "enable_placement_id": 0, 00:27:02.006 "enable_zerocopy_send_server": true, 00:27:02.006 "enable_zerocopy_send_client": false, 00:27:02.006 "zerocopy_threshold": 0, 00:27:02.006 "tls_version": 0, 00:27:02.006 "enable_ktls": false 00:27:02.006 } 00:27:02.006 }, 00:27:02.006 { 00:27:02.006 "method": "sock_impl_set_options", 00:27:02.006 "params": { 00:27:02.006 "impl_name": "posix", 00:27:02.006 "recv_buf_size": 2097152, 00:27:02.006 "send_buf_size": 2097152, 00:27:02.006 "enable_recv_pipe": true, 00:27:02.006 "enable_quickack": false, 00:27:02.006 "enable_placement_id": 0, 00:27:02.006 "enable_zerocopy_send_server": true, 00:27:02.006 "enable_zerocopy_send_client": false, 00:27:02.006 "zerocopy_threshold": 0, 00:27:02.006 "tls_version": 0, 00:27:02.006 "enable_ktls": false 00:27:02.006 } 00:27:02.006 } 00:27:02.006 ] 00:27:02.006 }, 00:27:02.006 { 00:27:02.006 "subsystem": "vmd", 00:27:02.006 "config": [] 00:27:02.006 }, 00:27:02.006 { 00:27:02.006 "subsystem": "accel", 00:27:02.006 "config": [ 00:27:02.006 { 00:27:02.006 "method": "accel_set_options", 00:27:02.006 "params": { 00:27:02.006 "small_cache_size": 128, 00:27:02.006 "large_cache_size": 16, 00:27:02.006 "task_count": 2048, 00:27:02.006 "sequence_count": 2048, 00:27:02.006 "buf_count": 2048 00:27:02.006 } 00:27:02.006 } 00:27:02.006 ] 00:27:02.006 
}, 00:27:02.006 { 00:27:02.006 "subsystem": "bdev", 00:27:02.006 "config": [ 00:27:02.006 { 00:27:02.006 "method": "bdev_set_options", 00:27:02.006 "params": { 00:27:02.006 "bdev_io_pool_size": 65535, 00:27:02.006 "bdev_io_cache_size": 256, 00:27:02.006 "bdev_auto_examine": true, 00:27:02.006 "iobuf_small_cache_size": 128, 00:27:02.006 "iobuf_large_cache_size": 16 00:27:02.006 } 00:27:02.006 }, 00:27:02.006 { 00:27:02.006 "method": "bdev_raid_set_options", 00:27:02.006 "params": { 00:27:02.006 "process_window_size_kb": 1024, 00:27:02.006 "process_max_bandwidth_mb_sec": 0 00:27:02.006 } 00:27:02.006 }, 00:27:02.006 { 00:27:02.006 "method": "bdev_iscsi_set_options", 00:27:02.006 "params": { 00:27:02.006 "timeout_sec": 30 00:27:02.006 } 00:27:02.006 }, 00:27:02.006 { 00:27:02.006 "method": "bdev_nvme_set_options", 00:27:02.006 "params": { 00:27:02.006 "action_on_timeout": "none", 00:27:02.006 "timeout_us": 0, 00:27:02.006 "timeout_admin_us": 0, 00:27:02.006 "keep_alive_timeout_ms": 10000, 00:27:02.006 "arbitration_burst": 0, 00:27:02.006 "low_priority_weight": 0, 00:27:02.006 "medium_priority_weight": 0, 00:27:02.006 "high_priority_weight": 0, 00:27:02.006 "nvme_adminq_poll_period_us": 10000, 00:27:02.006 "nvme_ioq_poll_period_us": 0, 00:27:02.006 "io_queue_requests": 512, 00:27:02.006 "delay_cmd_submit": true, 00:27:02.006 "transport_retry_count": 4, 00:27:02.006 "bdev_retry_count": 3, 00:27:02.006 "transport_ack_timeout": 0, 00:27:02.006 "ctrlr_loss_timeout_sec": 0, 00:27:02.006 "reconnect_delay_sec": 0, 00:27:02.006 "fast_io_fail_timeout_sec": 0, 00:27:02.006 "disable_auto_failback": false, 00:27:02.006 "generate_uuids": false, 00:27:02.006 "transport_tos": 0, 00:27:02.006 "nvme_error_stat": false, 00:27:02.006 "rdma_srq_size": 0, 00:27:02.006 "io_path_stat": false, 00:27:02.006 "allow_accel_sequence": false, 00:27:02.007 "rdma_max_cq_size": 0, 00:27:02.007 "rdma_cm_event_timeout_ms": 0, 00:27:02.007 "dhchap_digests": [ 00:27:02.007 "sha256", 00:27:02.007 "sha384", 00:27:02.007 "sha512" 00:27:02.007 ], 00:27:02.007 "dhchap_dhgroups": [ 00:27:02.007 "null", 00:27:02.007 "ffdhe2048", 00:27:02.007 "ffdhe3072", 00:27:02.007 "ffdhe4096", 00:27:02.007 "ffdhe6144", 00:27:02.007 "ffdhe8192" 00:27:02.007 ] 00:27:02.007 } 00:27:02.007 }, 00:27:02.007 { 00:27:02.007 "method": "bdev_nvme_attach_controller", 00:27:02.007 "params": { 00:27:02.007 "name": "nvme0", 00:27:02.007 "trtype": "TCP", 00:27:02.007 "adrfam": "IPv4", 00:27:02.007 "traddr": "127.0.0.1", 00:27:02.007 "trsvcid": "4420", 00:27:02.007 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:02.007 "prchk_reftag": false, 00:27:02.007 "prchk_guard": false, 00:27:02.007 "ctrlr_loss_timeout_sec": 0, 00:27:02.007 "reconnect_delay_sec": 0, 00:27:02.007 "fast_io_fail_timeout_sec": 0, 00:27:02.007 "psk": "key0", 00:27:02.007 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:02.007 "hdgst": false, 00:27:02.007 "ddgst": false 00:27:02.007 } 00:27:02.007 }, 00:27:02.007 { 00:27:02.007 "method": "bdev_nvme_set_hotplug", 00:27:02.007 "params": { 00:27:02.007 "period_us": 100000, 00:27:02.007 "enable": false 00:27:02.007 } 00:27:02.007 }, 00:27:02.007 { 00:27:02.007 "method": "bdev_wait_for_examine" 00:27:02.007 } 00:27:02.007 ] 00:27:02.007 }, 00:27:02.007 { 00:27:02.007 "subsystem": "nbd", 00:27:02.007 "config": [] 00:27:02.007 } 00:27:02.007 ] 00:27:02.007 }' 00:27:02.007 19:09:39 keyring_file -- keyring/file.sh@114 -- # killprocess 3279437 00:27:02.007 19:09:39 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3279437 ']' 00:27:02.007 19:09:39 
keyring_file -- common/autotest_common.sh@954 -- # kill -0 3279437 00:27:02.007 19:09:39 keyring_file -- common/autotest_common.sh@955 -- # uname 00:27:02.007 19:09:39 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:02.007 19:09:39 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3279437 00:27:02.007 19:09:39 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:02.007 19:09:39 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:02.007 19:09:39 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3279437' 00:27:02.007 killing process with pid 3279437 00:27:02.007 19:09:39 keyring_file -- common/autotest_common.sh@969 -- # kill 3279437 00:27:02.007 Received shutdown signal, test time was about 1.000000 seconds 00:27:02.007 00:27:02.007 Latency(us) 00:27:02.007 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:02.007 =================================================================================================================== 00:27:02.007 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:02.007 19:09:39 keyring_file -- common/autotest_common.sh@974 -- # wait 3279437 00:27:02.265 19:09:39 keyring_file -- keyring/file.sh@117 -- # bperfpid=3280902 00:27:02.265 19:09:39 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3280902 /var/tmp/bperf.sock 00:27:02.265 19:09:39 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3280902 ']' 00:27:02.265 19:09:39 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:02.265 19:09:39 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:27:02.265 19:09:39 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:02.265 19:09:39 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:02.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:27:02.265 19:09:39 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:27:02.265 "subsystems": [ 00:27:02.265 { 00:27:02.265 "subsystem": "keyring", 00:27:02.265 "config": [ 00:27:02.265 { 00:27:02.265 "method": "keyring_file_add_key", 00:27:02.265 "params": { 00:27:02.265 "name": "key0", 00:27:02.265 "path": "/tmp/tmp.hcIMxVRNUl" 00:27:02.265 } 00:27:02.265 }, 00:27:02.265 { 00:27:02.265 "method": "keyring_file_add_key", 00:27:02.265 "params": { 00:27:02.265 "name": "key1", 00:27:02.265 "path": "/tmp/tmp.BRlQ19mbvw" 00:27:02.265 } 00:27:02.265 } 00:27:02.265 ] 00:27:02.265 }, 00:27:02.265 { 00:27:02.265 "subsystem": "iobuf", 00:27:02.265 "config": [ 00:27:02.265 { 00:27:02.265 "method": "iobuf_set_options", 00:27:02.265 "params": { 00:27:02.265 "small_pool_count": 8192, 00:27:02.265 "large_pool_count": 1024, 00:27:02.265 "small_bufsize": 8192, 00:27:02.265 "large_bufsize": 135168 00:27:02.265 } 00:27:02.265 } 00:27:02.265 ] 00:27:02.265 }, 00:27:02.265 { 00:27:02.265 "subsystem": "sock", 00:27:02.265 "config": [ 00:27:02.265 { 00:27:02.265 "method": "sock_set_default_impl", 00:27:02.265 "params": { 00:27:02.265 "impl_name": "posix" 00:27:02.265 } 00:27:02.265 }, 00:27:02.265 { 00:27:02.265 "method": "sock_impl_set_options", 00:27:02.265 "params": { 00:27:02.265 "impl_name": "ssl", 00:27:02.265 "recv_buf_size": 4096, 00:27:02.265 "send_buf_size": 4096, 00:27:02.265 "enable_recv_pipe": true, 00:27:02.265 "enable_quickack": false, 00:27:02.265 "enable_placement_id": 0, 00:27:02.265 "enable_zerocopy_send_server": true, 00:27:02.265 "enable_zerocopy_send_client": false, 00:27:02.265 "zerocopy_threshold": 0, 00:27:02.265 "tls_version": 0, 00:27:02.265 "enable_ktls": false 00:27:02.265 } 00:27:02.265 }, 00:27:02.265 { 00:27:02.265 "method": "sock_impl_set_options", 00:27:02.265 "params": { 00:27:02.265 "impl_name": "posix", 00:27:02.265 "recv_buf_size": 2097152, 00:27:02.265 "send_buf_size": 2097152, 00:27:02.265 "enable_recv_pipe": true, 00:27:02.265 "enable_quickack": false, 00:27:02.265 "enable_placement_id": 0, 00:27:02.265 "enable_zerocopy_send_server": true, 00:27:02.265 "enable_zerocopy_send_client": false, 00:27:02.265 "zerocopy_threshold": 0, 00:27:02.265 "tls_version": 0, 00:27:02.265 "enable_ktls": false 00:27:02.265 } 00:27:02.265 } 00:27:02.265 ] 00:27:02.265 }, 00:27:02.265 { 00:27:02.265 "subsystem": "vmd", 00:27:02.265 "config": [] 00:27:02.265 }, 00:27:02.265 { 00:27:02.265 "subsystem": "accel", 00:27:02.265 "config": [ 00:27:02.265 { 00:27:02.265 "method": "accel_set_options", 00:27:02.265 "params": { 00:27:02.265 "small_cache_size": 128, 00:27:02.265 "large_cache_size": 16, 00:27:02.265 "task_count": 2048, 00:27:02.265 "sequence_count": 2048, 00:27:02.265 "buf_count": 2048 00:27:02.265 } 00:27:02.265 } 00:27:02.265 ] 00:27:02.265 }, 00:27:02.265 { 00:27:02.265 "subsystem": "bdev", 00:27:02.265 "config": [ 00:27:02.265 { 00:27:02.265 "method": "bdev_set_options", 00:27:02.265 "params": { 00:27:02.265 "bdev_io_pool_size": 65535, 00:27:02.265 "bdev_io_cache_size": 256, 00:27:02.265 "bdev_auto_examine": true, 00:27:02.265 "iobuf_small_cache_size": 128, 00:27:02.265 "iobuf_large_cache_size": 16 00:27:02.265 } 00:27:02.265 }, 00:27:02.265 { 00:27:02.265 "method": "bdev_raid_set_options", 00:27:02.265 "params": { 00:27:02.265 "process_window_size_kb": 1024, 00:27:02.265 "process_max_bandwidth_mb_sec": 0 00:27:02.265 } 00:27:02.265 }, 00:27:02.265 { 00:27:02.265 "method": "bdev_iscsi_set_options", 00:27:02.265 "params": { 00:27:02.265 "timeout_sec": 30 00:27:02.265 } 00:27:02.265 
}, 00:27:02.265 { 00:27:02.265 "method": "bdev_nvme_set_options", 00:27:02.265 "params": { 00:27:02.265 "action_on_timeout": "none", 00:27:02.265 "timeout_us": 0, 00:27:02.265 "timeout_admin_us": 0, 00:27:02.265 "keep_alive_timeout_ms": 10000, 00:27:02.265 "arbitration_burst": 0, 00:27:02.265 "low_priority_weight": 0, 00:27:02.265 "medium_priority_weight": 0, 00:27:02.265 "high_priority_weight": 0, 00:27:02.265 "nvme_adminq_poll_period_us": 10000, 00:27:02.265 "nvme_ioq_poll_period_us": 0, 00:27:02.265 "io_queue_requests": 512, 00:27:02.265 "delay_cmd_submit": true, 00:27:02.265 "transport_retry_count": 4, 00:27:02.265 "bdev_retry_count": 3, 00:27:02.265 "transport_ack_timeout": 0, 00:27:02.265 "ctrlr_loss_timeout_sec": 0, 00:27:02.265 "reconnect_delay_sec": 0, 00:27:02.265 "fast_io_fail_timeout_sec": 0, 00:27:02.265 "disable_auto_failback": false, 00:27:02.265 "generate_uuids": false, 00:27:02.265 "transport_tos": 0, 00:27:02.265 "nvme_error_stat": false, 00:27:02.265 "rdma_srq_size": 0, 00:27:02.265 "io_path_stat": false, 00:27:02.265 "allow_accel_sequence": false, 00:27:02.265 "rdma_max_cq_size": 0, 00:27:02.265 "rdma_cm_event_timeout_ms": 0, 00:27:02.265 "dhchap_digests": [ 00:27:02.265 "sha256", 00:27:02.265 "sha384", 00:27:02.265 "sha512" 00:27:02.265 ], 00:27:02.265 "dhchap_dhgroups": [ 00:27:02.265 "null", 00:27:02.265 "ffdhe2048", 00:27:02.265 "ffdhe3072", 00:27:02.265 "ffdhe4096", 00:27:02.265 "ffdhe6144", 00:27:02.265 "ffdhe8192" 00:27:02.265 ] 00:27:02.265 } 00:27:02.265 }, 00:27:02.265 { 00:27:02.265 "method": "bdev_nvme_attach_controller", 00:27:02.265 "params": { 00:27:02.265 "name": "nvme0", 00:27:02.265 "trtype": "TCP", 00:27:02.265 "adrfam": "IPv4", 00:27:02.265 "traddr": "127.0.0.1", 00:27:02.265 "trsvcid": "4420", 00:27:02.265 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:02.265 "prchk_reftag": false, 00:27:02.265 "prchk_guard": false, 00:27:02.265 "ctrlr_loss_timeout_sec": 0, 00:27:02.265 "reconnect_delay_sec": 0, 00:27:02.265 "fast_io_fail_timeout_sec": 0, 00:27:02.265 "psk": "key0", 00:27:02.265 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:02.265 "hdgst": false, 00:27:02.265 "ddgst": false 00:27:02.265 } 00:27:02.265 }, 00:27:02.265 { 00:27:02.265 "method": "bdev_nvme_set_hotplug", 00:27:02.265 "params": { 00:27:02.265 "period_us": 100000, 00:27:02.265 "enable": false 00:27:02.265 } 00:27:02.265 }, 00:27:02.265 { 00:27:02.265 "method": "bdev_wait_for_examine" 00:27:02.265 } 00:27:02.265 ] 00:27:02.265 }, 00:27:02.265 { 00:27:02.265 "subsystem": "nbd", 00:27:02.265 "config": [] 00:27:02.265 } 00:27:02.265 ] 00:27:02.265 }' 00:27:02.265 19:09:39 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:02.265 19:09:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:02.265 [2024-07-24 19:09:39.851989] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
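[Annotation] The second bdevperf instance above is started with `-c /dev/fd/63`: the JSON config just echoed (itself produced by the earlier `save_config` RPC) is handed to the process through bash process substitution rather than a file on disk. A hedged Python equivalent of that plumbing — binary path and config source are stand-ins for illustration, not the test's code:

```python
# Sketch of the `-c /dev/fd/N` trick: pass an in-memory JSON config to
# bdevperf over an inherited pipe, as the bash process substitution does.
import json
import os
import subprocess

config = {"subsystems": []}  # stand-in for the save_config output echoed above
r, w = os.pipe()
os.write(w, json.dumps(config).encode())
os.close(w)  # close the write end so the child sees EOF after the config

subprocess.run(
    ["./build/examples/bdevperf", "-q", "128", "-o", "4k", "-w", "randrw",
     "-M", "50", "-t", "1", "-m", "2", "-r", "/var/tmp/bperf.sock",
     "-z", "-c", f"/dev/fd/{r}"],
    pass_fds=(r,),  # keep the read end open in the child; /dev/fd/N resolves to it
    check=False,
)
os.close(r)
```

This works for configs smaller than the pipe buffer (the saved keyring/bdev config here is well under that); larger payloads would need a writer thread or a temporary file.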
00:27:02.265 [2024-07-24 19:09:39.852071] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3280902 ] 00:27:02.523 EAL: No free 2048 kB hugepages reported on node 1 00:27:02.523 [2024-07-24 19:09:39.912574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.523 [2024-07-24 19:09:40.031891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:02.783 [2024-07-24 19:09:40.229240] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:03.350 19:09:40 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:03.350 19:09:40 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:27:03.350 19:09:40 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:27:03.350 19:09:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:03.350 19:09:40 keyring_file -- keyring/file.sh@120 -- # jq length 00:27:03.609 19:09:41 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:27:03.609 19:09:41 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:27:03.609 19:09:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:03.609 19:09:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:03.609 19:09:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:03.609 19:09:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:03.609 19:09:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:03.867 19:09:41 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:27:03.867 19:09:41 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:27:03.867 19:09:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:03.867 19:09:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:03.867 19:09:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:03.867 19:09:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:03.868 19:09:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:04.125 19:09:41 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:27:04.125 19:09:41 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:27:04.125 19:09:41 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:27:04.125 19:09:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:27:04.384 19:09:41 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:27:04.384 19:09:41 keyring_file -- keyring/file.sh@1 -- # cleanup 00:27:04.384 19:09:41 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.hcIMxVRNUl /tmp/tmp.BRlQ19mbvw 00:27:04.384 19:09:41 keyring_file -- keyring/file.sh@20 -- # killprocess 3280902 00:27:04.384 19:09:41 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3280902 ']' 00:27:04.384 19:09:41 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3280902 00:27:04.384 19:09:41 keyring_file -- 
common/autotest_common.sh@955 -- # uname
00:27:04.384 19:09:41 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:04.384 19:09:41 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3280902
00:27:04.384 19:09:41 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:27:04.384 19:09:41 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:27:04.384 19:09:41 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3280902'
00:27:04.384 killing process with pid 3280902
00:27:04.384 19:09:41 keyring_file -- common/autotest_common.sh@969 -- # kill 3280902
00:27:04.384 Received shutdown signal, test time was about 1.000000 seconds
00:27:04.384
00:27:04.384                                                      Latency(us)
00:27:04.384 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:27:04.384 ===================================================================================================================
00:27:04.384 Total                       :       0.00       0.00       0.00       0.00       0.00 18446744073709551616.00       0.00
00:27:04.384 19:09:41 keyring_file -- common/autotest_common.sh@974 -- # wait 3280902
00:27:04.644 19:09:42 keyring_file -- keyring/file.sh@21 -- # killprocess 3279417
00:27:04.644 19:09:42 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3279417 ']'
00:27:04.644 19:09:42 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3279417
00:27:04.644 19:09:42 keyring_file -- common/autotest_common.sh@955 -- # uname
00:27:04.644 19:09:42 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:04.644 19:09:42 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3279417
00:27:04.644 19:09:42 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:27:04.644 19:09:42 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:27:04.644 19:09:42 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3279417'
00:27:04.644 killing process with pid 3279417
00:27:04.644 19:09:42 keyring_file -- common/autotest_common.sh@969 -- # kill 3279417
00:27:04.644 [2024-07-24 19:09:42.136311] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:27:04.644 19:09:42 keyring_file -- common/autotest_common.sh@974 -- # wait 3279417
00:27:05.212
00:27:05.212 real    0m14.995s
00:27:05.212 user    0m36.862s
00:27:05.212 sys     0m3.294s
00:27:05.212 19:09:42 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable
00:27:05.212 19:09:42 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:27:05.212 ************************************
00:27:05.212 END TEST keyring_file
00:27:05.212 ************************************
00:27:05.212 19:09:42 -- spdk/autotest.sh@300 -- # [[ y == y ]]
00:27:05.212 19:09:42 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh
00:27:05.212 19:09:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:27:05.212 19:09:42 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:27:05.212 19:09:42 -- common/autotest_common.sh@10 -- # set +x
00:27:05.212 ************************************
00:27:05.212 START TEST keyring_linux
00:27:05.212 ************************************
00:27:05.212 19:09:42 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh
00:27:05.212 * Looking for test
storage... 00:27:05.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:05.212 19:09:42 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:05.212 19:09:42 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:05.212 19:09:42 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:27:05.212 19:09:42 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:05.212 19:09:42 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:05.212 19:09:42 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:05.212 19:09:42 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:05.212 19:09:42 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:05.212 19:09:42 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:05.212 19:09:42 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:05.212 19:09:42 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:05.212 19:09:42 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:05.212 19:09:42 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:05.212 19:09:42 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:05.212 19:09:42 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:05.212 19:09:42 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:05.212 19:09:42 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:05.212 19:09:42 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:05.212 19:09:42 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:05.212 19:09:42 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:05.212 19:09:42 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:05.213 19:09:42 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:05.213 19:09:42 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:05.213 19:09:42 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.213 19:09:42 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.213 19:09:42 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.213 19:09:42 keyring_linux -- paths/export.sh@5 -- # export PATH 00:27:05.213 19:09:42 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.213 19:09:42 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:27:05.213 19:09:42 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:05.213 19:09:42 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:05.213 19:09:42 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:05.213 19:09:42 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:05.213 19:09:42 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:05.213 19:09:42 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:05.213 19:09:42 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:05.213 19:09:42 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:05.213 19:09:42 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:05.213 19:09:42 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:05.213 19:09:42 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:05.213 19:09:42 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:27:05.213 19:09:42 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:27:05.213 19:09:42 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:27:05.213 19:09:42 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:27:05.213 19:09:42 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:05.213 19:09:42 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:27:05.213 19:09:42 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:05.213 19:09:42 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:05.213 19:09:42 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:27:05.213 19:09:42 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:05.213 19:09:42 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:05.213 19:09:42 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:05.213 19:09:42 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:05.213 19:09:42 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:05.213 19:09:42 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:05.213 19:09:42 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:05.213 19:09:42 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:27:05.213 19:09:42 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:27:05.213 /tmp/:spdk-test:key0 00:27:05.213 19:09:42 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:27:05.213 19:09:42 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:05.213 19:09:42 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:27:05.213 19:09:42 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:05.213 19:09:42 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:05.213 19:09:42 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:27:05.213 19:09:42 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:05.213 19:09:42 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:05.213 19:09:42 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:05.213 19:09:42 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:05.213 19:09:42 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:05.213 19:09:42 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:05.213 19:09:42 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:05.213 19:09:42 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:27:05.213 19:09:42 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:27:05.213 /tmp/:spdk-test:key1 00:27:05.213 19:09:42 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3281382 00:27:05.213 19:09:42 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:05.213 19:09:42 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3281382 00:27:05.213 19:09:42 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3281382 ']' 00:27:05.213 19:09:42 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.213 19:09:42 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:05.213 19:09:42 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:05.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:05.213 19:09:42 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:05.213 19:09:42 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:05.213 [2024-07-24 19:09:42.813529] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
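The prep_key calls above produce the NVMe/TCP TLS PSK interchange format written to /tmp/:spdk-test:key0 and :key1: the literal prefix NVMeTLSkey-1, a hash field ("00" here, i.e. digest 0, a configured PSK with no HMAC applied), and the base64 of the key material with a CRC32 appended, all colon-delimited. A minimal sketch of what the python heredoc inside format_interchange_psk computes (the little-endian CRC suffix is an assumption; format_key in nvmf/common.sh is authoritative):

    python3 - <<'EOF'
    import base64, zlib
    key = b"00112233445566778899aabbccddeeff"    # key material exactly as passed to prep_key
    crc = zlib.crc32(key).to_bytes(4, "little")  # assumed byte order for the CRC32 suffix
    print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
    EOF

For key0 the run below shows the resulting string as NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:, i.e. the 32 ASCII bytes of the hex string plus 4 CRC bytes, base64-encoded.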
00:27:05.213 [2024-07-24 19:09:42.813622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3281382 ] 00:27:05.472 EAL: No free 2048 kB hugepages reported on node 1 00:27:05.472 [2024-07-24 19:09:42.874142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.472 [2024-07-24 19:09:42.977376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.731 19:09:43 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:05.731 19:09:43 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:27:05.731 19:09:43 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:27:05.731 19:09:43 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.731 19:09:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:05.731 [2024-07-24 19:09:43.224284] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:05.731 null0 00:27:05.731 [2024-07-24 19:09:43.256363] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:05.732 [2024-07-24 19:09:43.256802] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:05.732 19:09:43 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.732 19:09:43 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:27:05.732 962445437 00:27:05.732 19:09:43 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:27:05.732 739253312 00:27:05.732 19:09:43 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3281399 00:27:05.732 19:09:43 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:27:05.732 19:09:43 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3281399 /var/tmp/bperf.sock 00:27:05.732 19:09:43 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3281399 ']' 00:27:05.732 19:09:43 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:05.732 19:09:43 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:05.732 19:09:43 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:05.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:05.732 19:09:43 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:05.732 19:09:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:05.732 [2024-07-24 19:09:43.322917] Starting SPDK v24.09-pre git sha1 0a6bb28fa / DPDK 24.03.0 initialization... 
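Where keyring_file reads PSKs from files, keyring_linux stores them in the kernel session keyring and lets SPDK resolve them by name at attach time. Condensed, the round trip the test drives from here on looks like this (the PSK string is elided; serial numbers differ per run; the rpc.py lines appear verbatim in the trace that follows):

    sn=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:...:" @s)   # add to session keyring, prints the serial
    keyctl search @s user :spdk-test:key0        # name -> serial; this is linux.sh's get_keysn
    keyctl print "$sn"                           # read the payload back for comparison
    # On the SPDK side, enable the linux keyring plugin and reference the key by name:
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
    keyctl unlink "$sn"                          # cleanup; the log reports '1 links removed'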
00:27:05.732 [2024-07-24 19:09:43.322995] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3281399 ] 00:27:05.990 EAL: No free 2048 kB hugepages reported on node 1 00:27:05.991 [2024-07-24 19:09:43.387527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.991 [2024-07-24 19:09:43.508455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.991 19:09:43 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:05.991 19:09:43 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:27:05.991 19:09:43 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:27:05.991 19:09:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:27:06.249 19:09:43 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:27:06.249 19:09:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:06.816 19:09:44 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:06.816 19:09:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:06.816 [2024-07-24 19:09:44.350224] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:07.074 nvme0n1 00:27:07.074 19:09:44 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:27:07.074 19:09:44 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:27:07.074 19:09:44 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:07.074 19:09:44 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:07.074 19:09:44 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:07.074 19:09:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:07.331 19:09:44 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:27:07.331 19:09:44 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:07.331 19:09:44 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:27:07.331 19:09:44 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:27:07.331 19:09:44 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:07.331 19:09:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:07.331 19:09:44 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:27:07.589 19:09:44 keyring_linux -- keyring/linux.sh@25 -- # sn=962445437 00:27:07.589 19:09:44 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:27:07.589 19:09:44 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0
00:27:07.589 19:09:44 keyring_linux -- keyring/linux.sh@26 -- # [[ 962445437 == \9\6\2\4\4\5\4\3\7 ]]
00:27:07.589 19:09:44 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 962445437
00:27:07.589 19:09:44 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]]
00:27:07.589 19:09:44 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:07.589 Running I/O for 1 seconds...
00:27:08.524
00:27:08.524                                                      Latency(us)
00:27:08.524 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:27:08.524 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:27:08.524 nvme0n1                     :       1.02    4693.72      18.33       0.00       0.00   27009.87   12524.66   38641.97
00:27:08.524 ===================================================================================================================
00:27:08.524 Total                       :                 4693.72      18.33       0.00       0.00   27009.87   12524.66   38641.97
00:27:08.524 0
00:27:08.524 19:09:46 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:27:08.524 19:09:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:27:08.782 19:09:46 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0
00:27:08.782 19:09:46 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name=
00:27:08.782 19:09:46 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:27:08.782 19:09:46 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:27:08.782 19:09:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:27:08.782 19:09:46 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:27:09.042 19:09:46 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count ))
00:27:09.042 19:09:46 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:27:09.042 19:09:46 keyring_linux -- keyring/linux.sh@23 -- # return
00:27:09.042 19:09:46 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:27:09.042 19:09:46 keyring_linux -- common/autotest_common.sh@650 -- # local es=0
00:27:09.042 19:09:46 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:27:09.042 19:09:46 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd
00:27:09.042 19:09:46 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:27:09.042 19:09:46 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd
00:27:09.042 19:09:46 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:27:09.042 19:09:46 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:27:09.042 19:09:46 keyring_linux --
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:09.301 [2024-07-24 19:09:46.828884] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:09.301 [2024-07-24 19:09:46.829019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb78030 (107): Transport endpoint is not connected 00:27:09.301 [2024-07-24 19:09:46.830012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb78030 (9): Bad file descriptor 00:27:09.301 [2024-07-24 19:09:46.831010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:09.302 [2024-07-24 19:09:46.831042] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:09.302 [2024-07-24 19:09:46.831057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:09.302 request: 00:27:09.302 { 00:27:09.302 "name": "nvme0", 00:27:09.302 "trtype": "tcp", 00:27:09.302 "traddr": "127.0.0.1", 00:27:09.302 "adrfam": "ipv4", 00:27:09.302 "trsvcid": "4420", 00:27:09.302 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:09.302 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:09.302 "prchk_reftag": false, 00:27:09.302 "prchk_guard": false, 00:27:09.302 "hdgst": false, 00:27:09.302 "ddgst": false, 00:27:09.302 "psk": ":spdk-test:key1", 00:27:09.302 "method": "bdev_nvme_attach_controller", 00:27:09.302 "req_id": 1 00:27:09.302 } 00:27:09.302 Got JSON-RPC error response 00:27:09.302 response: 00:27:09.302 { 00:27:09.302 "code": -5, 00:27:09.302 "message": "Input/output error" 00:27:09.302 } 00:27:09.302 19:09:46 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:27:09.302 19:09:46 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:09.302 19:09:46 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:09.302 19:09:46 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:09.302 19:09:46 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:27:09.302 19:09:46 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:27:09.302 19:09:46 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:27:09.302 19:09:46 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:27:09.302 19:09:46 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:27:09.302 19:09:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:27:09.302 19:09:46 keyring_linux -- keyring/linux.sh@33 -- # sn=962445437 00:27:09.302 19:09:46 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 962445437 00:27:09.302 1 links removed 00:27:09.302 19:09:46 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:27:09.302 19:09:46 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:27:09.302 19:09:46 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:27:09.302 19:09:46 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:27:09.302 19:09:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:27:09.302 19:09:46 keyring_linux -- keyring/linux.sh@33 -- # sn=739253312 00:27:09.302 19:09:46 
keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 739253312 00:27:09.302 1 links removed 00:27:09.302 19:09:46 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3281399 00:27:09.302 19:09:46 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3281399 ']' 00:27:09.302 19:09:46 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3281399 00:27:09.302 19:09:46 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:27:09.302 19:09:46 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:09.302 19:09:46 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3281399 00:27:09.302 19:09:46 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:09.302 19:09:46 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:09.302 19:09:46 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3281399' 00:27:09.302 killing process with pid 3281399 00:27:09.302 19:09:46 keyring_linux -- common/autotest_common.sh@969 -- # kill 3281399 00:27:09.302 Received shutdown signal, test time was about 1.000000 seconds 00:27:09.302 00:27:09.302 Latency(us) 00:27:09.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:09.302 =================================================================================================================== 00:27:09.302 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:09.302 19:09:46 keyring_linux -- common/autotest_common.sh@974 -- # wait 3281399 00:27:09.562 19:09:47 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3281382 00:27:09.562 19:09:47 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3281382 ']' 00:27:09.562 19:09:47 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3281382 00:27:09.562 19:09:47 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:27:09.562 19:09:47 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:09.562 19:09:47 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3281382 00:27:09.562 19:09:47 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:09.562 19:09:47 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:09.562 19:09:47 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3281382' 00:27:09.562 killing process with pid 3281382 00:27:09.562 19:09:47 keyring_linux -- common/autotest_common.sh@969 -- # kill 3281382 00:27:09.562 19:09:47 keyring_linux -- common/autotest_common.sh@974 -- # wait 3281382 00:27:10.130 00:27:10.130 real 0m4.952s 00:27:10.130 user 0m9.271s 00:27:10.130 sys 0m1.528s 00:27:10.130 19:09:47 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:10.130 19:09:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:10.130 ************************************ 00:27:10.130 END TEST keyring_linux 00:27:10.130 ************************************ 00:27:10.130 19:09:47 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:27:10.130 19:09:47 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:27:10.130 19:09:47 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:27:10.130 19:09:47 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:27:10.130 19:09:47 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:27:10.130 19:09:47 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:27:10.130 19:09:47 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:27:10.130 19:09:47 -- spdk/autotest.sh@347 -- 
# '[' 0 -eq 1 ']' 00:27:10.130 19:09:47 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:27:10.130 19:09:47 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:27:10.131 19:09:47 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:27:10.131 19:09:47 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:27:10.131 19:09:47 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:27:10.131 19:09:47 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:27:10.131 19:09:47 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:27:10.131 19:09:47 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:27:10.131 19:09:47 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:27:10.131 19:09:47 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:10.131 19:09:47 -- common/autotest_common.sh@10 -- # set +x 00:27:10.131 19:09:47 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:27:10.131 19:09:47 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:27:10.131 19:09:47 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:27:10.131 19:09:47 -- common/autotest_common.sh@10 -- # set +x 00:27:12.033 INFO: APP EXITING 00:27:12.033 INFO: killing all VMs 00:27:12.033 INFO: killing vhost app 00:27:12.033 INFO: EXIT DONE 00:27:12.969 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:27:13.228 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:27:13.228 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:27:13.228 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:27:13.228 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:27:13.228 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:27:13.228 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:27:13.228 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:27:13.228 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:27:13.228 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:27:13.228 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:27:13.228 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:27:13.228 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:27:13.228 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:27:13.228 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:27:13.228 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:27:13.228 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:27:14.605 Cleaning 00:27:14.605 Removing: /var/run/dpdk/spdk0/config 00:27:14.605 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:14.605 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:14.605 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:14.605 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:14.605 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:27:14.605 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:27:14.605 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:27:14.605 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:27:14.605 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:14.605 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:14.605 Removing: /var/run/dpdk/spdk1/config 00:27:14.605 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:14.605 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:14.605 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:27:14.605 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:14.605 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:27:14.605 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:27:14.605 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:27:14.605 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:27:14.605 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:14.605 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:14.605 Removing: /var/run/dpdk/spdk1/mp_socket 00:27:14.605 Removing: /var/run/dpdk/spdk2/config 00:27:14.605 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:14.605 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:14.605 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:14.605 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:14.605 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:27:14.605 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:27:14.605 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:27:14.605 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:27:14.605 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:14.605 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:14.605 Removing: /var/run/dpdk/spdk3/config 00:27:14.605 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:14.605 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:14.605 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:14.605 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:14.605 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:27:14.605 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:27:14.605 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:27:14.605 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:27:14.605 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:14.605 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:14.605 Removing: /var/run/dpdk/spdk4/config 00:27:14.605 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:14.605 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:14.605 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:14.605 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:14.605 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:27:14.605 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:27:14.605 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:27:14.605 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:27:14.605 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:14.605 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:14.605 Removing: /dev/shm/bdev_svc_trace.1 00:27:14.605 Removing: /dev/shm/nvmf_trace.0 00:27:14.605 Removing: /dev/shm/spdk_tgt_trace.pid3024298 00:27:14.605 Removing: /var/run/dpdk/spdk0 00:27:14.605 Removing: /var/run/dpdk/spdk1 00:27:14.605 Removing: /var/run/dpdk/spdk2 00:27:14.605 Removing: /var/run/dpdk/spdk3 00:27:14.605 Removing: /var/run/dpdk/spdk4 00:27:14.605 Removing: /var/run/dpdk/spdk_pid3022734 00:27:14.605 Removing: /var/run/dpdk/spdk_pid3023470 00:27:14.605 Removing: /var/run/dpdk/spdk_pid3024298 00:27:14.605 Removing: /var/run/dpdk/spdk_pid3024722 00:27:14.605 Removing: /var/run/dpdk/spdk_pid3025408 00:27:14.605 Removing: /var/run/dpdk/spdk_pid3025678 00:27:14.605 Removing: /var/run/dpdk/spdk_pid3026398 00:27:14.605 Removing: /var/run/dpdk/spdk_pid3026409 00:27:14.605 Removing: /var/run/dpdk/spdk_pid3026664 00:27:14.605 Removing: /var/run/dpdk/spdk_pid3027858 00:27:14.605 Removing: /var/run/dpdk/spdk_pid3028906 00:27:14.605 Removing: /var/run/dpdk/spdk_pid3029220 00:27:14.605 Removing: /var/run/dpdk/spdk_pid3029408 
00:27:14.605 Removing: /var/run/dpdk/spdk_pid3029614 00:27:14.605 Removing: /var/run/dpdk/spdk_pid3029804 00:27:14.605 Removing: /var/run/dpdk/spdk_pid3030068 00:27:14.605 Removing: /var/run/dpdk/spdk_pid3030240 00:27:14.605 Removing: /var/run/dpdk/spdk_pid3030423 00:27:14.605 Removing: /var/run/dpdk/spdk_pid3030734 00:27:14.605 Removing: /var/run/dpdk/spdk_pid3033085 00:27:14.605 Removing: /var/run/dpdk/spdk_pid3033247 00:27:14.605 Removing: /var/run/dpdk/spdk_pid3033542 00:27:14.605 Removing: /var/run/dpdk/spdk_pid3033643 00:27:14.605 Removing: /var/run/dpdk/spdk_pid3033985 00:27:14.605 Removing: /var/run/dpdk/spdk_pid3034112 00:27:14.605 Removing: /var/run/dpdk/spdk_pid3034421 00:27:14.605 Removing: /var/run/dpdk/spdk_pid3034559 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3034850 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3034879 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3035153 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3035204 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3035662 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3035816 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3036007 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3038085 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3040716 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3048308 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3048724 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3051357 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3051513 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3054152 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3057866 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3060051 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3066589 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3071797 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3072998 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3073660 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3084636 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3086916 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3112748 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3116028 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3120094 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3124454 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3124457 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3125108 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3125759 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3126303 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3126706 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3126827 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3126964 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3127097 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3127109 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3127762 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3128297 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3128955 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3129357 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3129359 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3129621 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3130515 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3131236 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3136688 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3162170 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3164959 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3166134 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3167447 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3167515 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3167609 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3167741 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3168186 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3169498 
00:27:14.864 Removing: /var/run/dpdk/spdk_pid3170429 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3170906 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3173025 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3173453 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3173897 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3176411 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3182433 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3185104 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3188871 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3189821 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3190918 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3193619 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3195987 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3200202 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3200204 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3203037 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3203237 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3203371 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3203640 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3203645 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3206413 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3206810 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3209512 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3212000 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3215414 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3218879 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3225229 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3229704 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3229707 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3242043 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3242455 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3243093 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3243635 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3244589 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3244993 00:27:14.864 Removing: /var/run/dpdk/spdk_pid3245403 00:27:14.865 Removing: /var/run/dpdk/spdk_pid3245848 00:27:14.865 Removing: /var/run/dpdk/spdk_pid3248323 00:27:14.865 Removing: /var/run/dpdk/spdk_pid3248588 00:27:14.865 Removing: /var/run/dpdk/spdk_pid3252374 00:27:14.865 Removing: /var/run/dpdk/spdk_pid3252557 00:27:14.865 Removing: /var/run/dpdk/spdk_pid3254166 00:27:14.865 Removing: /var/run/dpdk/spdk_pid3259088 00:27:14.865 Removing: /var/run/dpdk/spdk_pid3259208 00:27:14.865 Removing: /var/run/dpdk/spdk_pid3262104 00:27:14.865 Removing: /var/run/dpdk/spdk_pid3263431 00:27:14.865 Removing: /var/run/dpdk/spdk_pid3264799 00:27:14.865 Removing: /var/run/dpdk/spdk_pid3265650 00:27:14.865 Removing: /var/run/dpdk/spdk_pid3267062 00:27:14.865 Removing: /var/run/dpdk/spdk_pid3267932 00:27:14.865 Removing: /var/run/dpdk/spdk_pid3273344 00:27:14.865 Removing: /var/run/dpdk/spdk_pid3273770 00:27:14.865 Removing: /var/run/dpdk/spdk_pid3274238 00:27:14.865 Removing: /var/run/dpdk/spdk_pid3276180 00:27:14.865 Removing: /var/run/dpdk/spdk_pid3276574 00:27:14.865 Removing: /var/run/dpdk/spdk_pid3276975 00:27:14.865 Removing: /var/run/dpdk/spdk_pid3279417 00:27:14.865 Removing: /var/run/dpdk/spdk_pid3279437 00:27:14.865 Removing: /var/run/dpdk/spdk_pid3280902 00:27:14.865 Removing: /var/run/dpdk/spdk_pid3281382 00:27:14.865 Removing: /var/run/dpdk/spdk_pid3281399 00:27:14.865 Clean 00:27:15.131 19:09:52 -- common/autotest_common.sh@1451 -- # return 0 00:27:15.131 19:09:52 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:27:15.131 19:09:52 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:15.131 19:09:52 -- common/autotest_common.sh@10 -- # set +x 00:27:15.131 
19:09:52 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:27:15.131 19:09:52 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:15.131 19:09:52 -- common/autotest_common.sh@10 -- # set +x 00:27:15.131 19:09:52 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:27:15.131 19:09:52 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:27:15.131 19:09:52 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:27:15.131 19:09:52 -- spdk/autotest.sh@395 -- # hash lcov 00:27:15.131 19:09:52 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:27:15.131 19:09:52 -- spdk/autotest.sh@397 -- # hostname 00:27:15.131 19:09:52 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:27:15.461 geninfo: WARNING: invalid characters removed from testname! 00:27:47.525 19:10:20 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:27:47.525 19:10:24 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:27:50.064 19:10:27 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:27:53.345 19:10:30 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:27:55.872 19:10:33 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:00.049 19:10:37 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:02.576 19:10:39 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:02.576 19:10:39 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:02.576 19:10:39 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:02.576 19:10:39 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:02.576 19:10:39 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:02.576 19:10:39 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.576 19:10:39 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.576 19:10:39 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.576 19:10:39 -- paths/export.sh@5 -- $ export PATH 00:28:02.576 19:10:39 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.576 19:10:39 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:28:02.576 19:10:39 -- common/autobuild_common.sh@447 -- $ date +%s 00:28:02.576 19:10:39 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721841039.XXXXXX 00:28:02.576 19:10:39 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721841039.emKIPf 00:28:02.576 19:10:39 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:28:02.576 19:10:39 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:28:02.576 19:10:39 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:28:02.576 19:10:39 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:28:02.576 19:10:39 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp 
--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:28:02.576 19:10:39 -- common/autobuild_common.sh@463 -- $ get_config_params 00:28:02.576 19:10:39 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:28:02.576 19:10:39 -- common/autotest_common.sh@10 -- $ set +x 00:28:02.576 19:10:39 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:28:02.576 19:10:39 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:28:02.576 19:10:39 -- pm/common@17 -- $ local monitor 00:28:02.576 19:10:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:02.576 19:10:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:02.576 19:10:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:02.576 19:10:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:02.576 19:10:39 -- pm/common@21 -- $ date +%s 00:28:02.576 19:10:39 -- pm/common@25 -- $ sleep 1 00:28:02.576 19:10:39 -- pm/common@21 -- $ date +%s 00:28:02.576 19:10:39 -- pm/common@21 -- $ date +%s 00:28:02.576 19:10:39 -- pm/common@21 -- $ date +%s 00:28:02.576 19:10:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721841039 00:28:02.576 19:10:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721841039 00:28:02.576 19:10:39 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721841039 00:28:02.576 19:10:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721841039 00:28:02.576 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721841039_collect-vmstat.pm.log 00:28:02.576 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721841039_collect-cpu-load.pm.log 00:28:02.577 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721841039_collect-cpu-temp.pm.log 00:28:02.577 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721841039_collect-bmc-pm.bmc.pm.log 00:28:03.513 19:10:40 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:28:03.513 19:10:40 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:28:03.513 19:10:40 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:03.513 19:10:40 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:28:03.513 19:10:40 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:28:03.513 19:10:40 -- spdk/autopackage.sh@19 -- $ timing_finish 00:28:03.513 19:10:40 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:28:03.513 19:10:40 -- 
00:28:03.513 19:10:40 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:28:03.513 19:10:40 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:03.513 19:10:40 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:28:03.513 19:10:40 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:28:03.513 19:10:40 -- spdk/autopackage.sh@19 -- $ timing_finish
00:28:03.513 19:10:40 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:28:03.513 19:10:40 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:28:03.513 19:10:40 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:28:03.513 19:10:41 -- spdk/autopackage.sh@20 -- $ exit 0
00:28:03.513 19:10:41 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:28:03.513 19:10:41 -- pm/common@29 -- $ signal_monitor_resources TERM
00:28:03.513 19:10:41 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:28:03.513 19:10:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:03.513 19:10:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:28:03.513 19:10:41 -- pm/common@44 -- $ pid=3291069
00:28:03.513 19:10:41 -- pm/common@50 -- $ kill -TERM 3291069
00:28:03.513 19:10:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:03.513 19:10:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:28:03.513 19:10:41 -- pm/common@44 -- $ pid=3291071
00:28:03.513 19:10:41 -- pm/common@50 -- $ kill -TERM 3291071
00:28:03.513 19:10:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:03.513 19:10:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:28:03.513 19:10:41 -- pm/common@44 -- $ pid=3291073
00:28:03.513 19:10:41 -- pm/common@50 -- $ kill -TERM 3291073
00:28:03.513 19:10:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:03.513 19:10:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:28:03.513 19:10:41 -- pm/common@44 -- $ pid=3291100
00:28:03.513 19:10:41 -- pm/common@50 -- $ sudo -E kill -TERM 3291100
00:28:03.513 + [[ -n 2939058 ]]
00:28:03.513 + sudo kill 2939058
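
[Editor's note] The pm/common@42-50 loop above is the other half of the monitor pattern: for each monitor it tests the .pid file, reads the recorded PID, and sends TERM (via sudo for the bmc-pm collector). The epilogue below then repeats the prologue's pgrep/grep/awk cleanup to kill anything still touching the workspace. A sketch of both steps under the same assumed layout; the xargs form is a variant of the command substitution actually traced, and the per-monitor sudo handling is simplified:

    # Teardown sketch: TERM every monitor whose PID file exists.
    power_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
    for pidfile in "$power_dir"/collect-*.pid; do
        [[ -e $pidfile ]] || continue
        kill -TERM "$(<"$pidfile")" 2>/dev/null || true
    done
    # Workspace cleanup, as in the '++ sudo pgrep' trace below: list matching
    # processes, drop the pgrep itself, kill the survivors.
    sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk \
        | grep -v 'sudo pgrep' | awk '{print $1}' | xargs -r sudo kill -9
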
00:28:03.523 [Pipeline] }
00:28:03.544 [Pipeline] // stage
00:28:03.550 [Pipeline] }
00:28:03.564 [Pipeline] // timeout
00:28:03.570 [Pipeline] }
00:28:03.582 [Pipeline] // catchError
00:28:03.586 [Pipeline] }
00:28:03.598 [Pipeline] // wrap
00:28:03.602 [Pipeline] }
00:28:03.613 [Pipeline] // catchError
00:28:03.623 [Pipeline] stage
00:28:03.625 [Pipeline] { (Epilogue)
00:28:03.638 [Pipeline] catchError
00:28:03.640 [Pipeline] {
00:28:03.655 [Pipeline] echo
00:28:03.657 Cleanup processes
00:28:03.663 [Pipeline] sh
00:28:03.947 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:03.947 3291204 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:28:03.947 3291334 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:03.961 [Pipeline] sh
00:28:04.243 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:04.243 ++ grep -v 'sudo pgrep'
00:28:04.243 ++ awk '{print $1}'
00:28:04.243 + sudo kill -9 3291204
00:28:04.255 [Pipeline] sh
00:28:04.575 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:28:14.568 [Pipeline] sh
00:28:14.850 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:28:14.850 Artifacts sizes are good
00:28:14.862 [Pipeline] archiveArtifacts
00:28:14.869 Archiving artifacts
00:28:15.051 [Pipeline] sh
00:28:15.331 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:28:15.345 [Pipeline] cleanWs
00:28:15.355 [WS-CLEANUP] Deleting project workspace...
00:28:15.355 [WS-CLEANUP] Deferred wipeout is used...
00:28:15.361 [WS-CLEANUP] done
00:28:15.363 [Pipeline] }
00:28:15.385 [Pipeline] // catchError
00:28:15.398 [Pipeline] sh
00:28:15.679 + logger -p user.info -t JENKINS-CI
00:28:15.688 [Pipeline] }
00:28:15.705 [Pipeline] // stage
00:28:15.711 [Pipeline] }
00:28:15.729 [Pipeline] // node
00:28:15.736 [Pipeline] End of Pipeline
00:28:15.774 Finished: SUCCESS